> which results in a new ZK connection. There have certainly been bugs
> like that in the past (speaking generally, not specifically).
>
> On 6/1/20 5:59 PM, anil gupta wrote:
> > Hi Folks,
> >
> > We are running into HBase problems due to hitting the limit of ZK
> >
these also created by hbase clients/apps (my guess is NO)? How can I
calculate the optimal value of maxClientCnxns for my cluster/usage?
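One common culprit is creating a new HBase Connection (and with it a new ZK
session) per request instead of sharing one per process. A minimal sketch
against the 1.x client API, assuming a long-lived JVM (class and table names
are invented):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Table;

    public class SharedConnection {
        // One Connection (and one ZK session) per JVM, shared by all threads.
        private static final Connection CONN;
        static {
            try {
                CONN = ConnectionFactory.createConnection(HBaseConfiguration.create());
            } catch (Exception e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        // Table instances are lightweight: create per request, close promptly.
        public static Table table(String name) throws java.io.IOException {
            return CONN.getTable(TableName.valueOf(name));
        }
    }

With one ZK session per client JVM, maxClientCnxns only needs headroom for the
number of client processes per host plus the HBase daemons themselves.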
--
Thanks & Regards,
Anil Gupta
Cloned tables and snapshots should not be impacted if you drop the source table.
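A quick way to convince yourself, sketched against the 1.x Admin API (table
and snapshot names are made up):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public class CloneThenDrop {
        public static void check(Connection conn) throws Exception {
            TableName src = TableName.valueOf("source_t");
            try (Admin admin = conn.getAdmin()) {
                admin.snapshot("source_t_snap", src);
                admin.cloneSnapshot("source_t_snap", TableName.valueOf("clone_t"));
                admin.disableTable(src);
                admin.deleteTable(src);
                // clone_t and source_t_snap remain readable: HFiles still
                // referenced by a snapshot are archived, not deleted.
            }
        }
    }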
Sent from my iPhone
> On Nov 28, 2018, at 5:23 PM, William Shen wrote:
>
> Hi,
>
> I understand that changes made to the tables cloned using snapshot will not
> affect the snapshot nor the source data table the sn
nd run a major compaction on t2, will I
>> see the decrease in table size for t2? If I compare the size of t2 and t1,
>> I should see a smaller size for t2?
>>
>> Thanks.
>>
>> Antonio.
>>
>>> On Sun, Aug 26, 2018 at 3:33 PM Anil Gupta wrote:
>
You will need to run a major compaction on the table for it to clean up/delete the
extra versions.
Btw, 18000 max versions is an unusually high value.
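For reference, both steps sketched with the 1.x client (table, family and
version count below are placeholders):

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.util.Bytes;

    public class TrimVersions {
        public static void trim(Connection conn) throws Exception {
            TableName tn = TableName.valueOf("my_table");
            try (Admin admin = conn.getAdmin()) {
                // Lower the retained versions on the column family first...
                HTableDescriptor desc = admin.getTableDescriptor(tn);
                HColumnDescriptor cf = desc.getFamily(Bytes.toBytes("cf"));
                cf.setMaxVersions(3);
                admin.modifyTable(tn, desc);
                // ...then major-compact so excess versions are physically
                // dropped. The call is asynchronous.
                admin.majorCompact(tn);
            }
        }
    }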
Are you using HBase on S3 or HBase on HDFS?
Sent from my iPhone
> On Aug 26, 2018, at 2:34 PM, Antonio Si wrote:
>
> Hello,
>
> I have a hbase ta
oupInformation.setLoginUser(userGroupInformation);
>
> I am getting bellow errors,
>
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
> attempts=36, exceptions: Mon Jul 09 18:45:57 IST 2018, null,
> java.net.SocketTimeoutException: callTimeout=6, callDuration=64965:
> row
> '' on table 'DEMO_TABLE' at
> region=DEMO_TABLE,,1529819280641.40f0e7dc4159937619da237915be8b11.,
> hostname=dn1-devup.mstorm.com,60020,1531051433899, seqNum=526190
>
> Exception : java.io.IOException: Failed to get result within timeout,
> timeout=6ms
>
>
> --
> Regards,
> Lalit Jadhav
> Network Component Private Limited.
>
--
Thanks & Regards,
Anil Gupta
It seems you might have a write hotspot.
Are your writes evenly distributed across the cluster? Do you have more than
15-20 regions for that table?
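If the rowkey is monotonically increasing (e.g. a timestamp), one common fix
is to salt it so writes fan out across pre-split regions. A rough sketch; the
bucket count and key shape are assumptions, not from this thread:

    import org.apache.hadoop.hbase.util.Bytes;

    public class SaltedKeys {
        private static final int BUCKETS = 16; // roughly >= number of region servers

        // Prepend a one-byte salt derived from the natural key, so sequential
        // keys spread over BUCKETS pre-split regions instead of one hotspot.
        public static byte[] saltedRow(String naturalKey) {
            byte salt = (byte) ((naturalKey.hashCode() & 0x7fffffff) % BUCKETS);
            return Bytes.add(new byte[] { salt }, Bytes.toBytes(naturalKey));
        }
    }

Reads must then fan out over all BUCKETS prefixes, so this trades scan
simplicity for write throughput.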
Sent from my iPhone
> On May 22, 2018, at 9:52 PM, Kang Minwoo wrote:
>
> I think hbase flush is too slow,
> so the memstore reached its upper limit.
>
compaction usually takes care of it. If you want very high locality from the
beginning, then you can run a major compaction on the new table after your
initial load.
HTH,
Anil Gupta
On Mon, Feb 19, 2018 at 11:46 PM, Marcell Ortutay
wrote:
> I have a large HBase table (~10 TB) that has an existing key struct
t 10:11 AM, Ted Yu wrote:
> You can clean up the oldWALs directory, beginning with the oldest data.
>
> Please open support case with the vendor.
>
> On Sat, Feb 10, 2018 at 10:02 AM, anil gupta
> wrote:
>
> > Hi Ted,
> >
> > We cleaned up all the snapshots around
> after 2018-02-07 09:10:08 ?
>
> Do you see CorruptedSnapshotException for file outside of
> /apps/hbase/data/.hbase-snapshot/.tmp/ ?
>
> Cheers
>
--
Thanks & Regards,
Anil Gupta
> Please see the first few review comments of HBASE-16464.
>
> You can sideline the corrupt snapshots (according to master log).
>
> You can also contact the vendor for a HOTFIX.
>
> Cheers
>
> On Sat, Feb 10, 2018 at 8:13 AM, anil gupta wrote:
>
> > Hi Folks,
> >
ystem.java:767)
at
org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils.readSnapshotInfo(SnapshotDescriptionUtils.java:306)
... 26 more
--
Thanks & Regards,
Anil Gupta
> 2.3.4 is really old - please upgrade to 2.6.3
>
> You should consider asking on the vendor's community forum.
>
> Cheers
>
> On Thu, Feb 8, 2018 at 3:06 PM, anil gupta wrote:
>
> > Hi Folks,
> >
> > We are running a 60 Node MapReduce/HBase HDP cluster. H
ling around but can't find anything
concrete to fix this problem. Currently, 15/60 nodes are already down in the
last 2 days.
Can someone please point out what might be causing these RegionServer
failures?
--
Thanks & Regards,
Anil Gupta
> >> > Manjeet Singh
> > > >> >
> > > >> >
> > > >> >
> > > >> >
> > > >> >
> > > >> > Hi All,
> > > >> >
> > > >> > I have query regarding hbase data migration from one cluster to
> > > another
> > > >> > cluster in same N/W, but with a different version of hbase one is
> > > >> 0.94.27
> > > >> > (source cluster hbase) and another is destination cluster hbase
> > > version
> > > >> is
> > > >> > 1.2.1.
> > > >> >
> > > >> > I have used below command to take backup of hbase table on source
> > > >> cluster
> > > >> > is:
> > > >> > ./hbase org.apache.hadoop.hbase.mapreduce.Export SPDBRebuild
> > > >> > /data/backupData/
> > > >> >
> > > >> > below files were generated by the above command:
> > > >> >
> > > >> >
> > > >> > drwxr-xr-x 3 root root4096 Dec 9 2016 _logs
> > > >> > -rw-r--r-- 1 root root 788227695 Dec 16 2016 part-m-0
> > > >> > -rw-r--r-- 1 root root 1098757026 Dec 16 2016 part-m-1
> > > >> > -rw-r--r-- 1 root root 906973626 Dec 16 2016 part-m-2
> > > >> > -rw-r--r-- 1 root root 1981769314 Dec 16 2016 part-m-3
> > > >> > -rw-r--r-- 1 root root 2099785782 Dec 16 2016 part-m-4
> > > >> > -rw-r--r-- 1 root root 4118835540 Dec 16 2016 part-m-5
> > > >> > -rw-r--r-- 1 root root 14217981341 Dec 16 2016 part-m-6
> > > >> > -rw-r--r-- 1 root root 0 Dec 16 2016 _SUCCESS
> > > >> >
> > > >> >
> > > >> > in order to restore these files I am assuming I have to move these
> > > >> files in
> > > >> > destination cluster and have to run below command
> > > >> >
> > > >> > hbase org.apache.hadoop.hbase.mapreduce.Import
> > > >> > /data/backupData/
> > > >> >
> > > >> > Please suggest if I am on correct direction, second if anyone have
> > > >> another
> > > >> > option.
> > > >> > I have tried this with test data but the above command took a very long
> > time
> > > >> and
> > > >> > at the end it fails
> > > >> >
> > > >> > 17/10/23 11:54:21 INFO mapred.JobClient: map 0% reduce 0%
> > > >> > 17/10/23 12:04:24 INFO mapred.JobClient: Task Id :
> > > >> > attempt_201710131340_0355_m_02_0, Status : FAILED
> > > >> > Task attempt_201710131340_0355_m_02_0 failed to report status
> > for
> > > >> 600
> > > >> > seconds. Killing!
> > > >> >
> > > >> >
> > > >> > Thanks
> > > >> > Manjeet Singh
> > > >> >
> > > >> >
> > > >> >
> > > >> >
> > > >> >
> > > >> >
> > > >> > --
> > > >> > luv all
> > > >> >
> > > >>
> > > >
> > > >
> > > >
> > > > --
> > > > luv all
> > > >
> > >
> > >
> > >
> > > --
> > > luv all
> > >
> >
> --
>
>
> -- Enrico Olivelli
>
--
Thanks & Regards,
Anil Gupta
leted hbase data with " hdfs dfs -rmr -skipTrash /hbase",
>
> Is there any way to recover the deleted data?
>
> Thanks a lot!
>
--
Thanks & Regards,
Anil Gupta
line 106:
>
> checkClosed();
>
> if (off < 0 || len < 0 || off > b.length - len) {
> throw new ArrayIndexOutOfBoundsException();
>
> You didn't get ArrayIndexOutOfBoundsException - maybe b was null ?
>
> On Thu, Jul 6, 2017 at 2:08 PM, anil gupta
read
>
> Do you see similar line in region server log ?
>
> Cheers
>
> On Thu, Jul 6, 2017 at 1:48 PM, anil gupta wrote:
>
> > Hi All,
> >
> > We are running HBase/Phoenix on EMR5.2(HBase1.2.3 and Phoenix4.7) and we
> > are running into the following exception whe
wiping out this table and rebuilding the dataset. We
tried to drop the table and recreate it, but that didn't fix it.
Can anyone please let us know how we can get rid of the above problem? Are
we running into https://issues.apache.org/jira/browse/HBASE-16960?
--
Thanks & Regards,
Anil Gupta
Cross posting since this seems to be an HBase issue.
I think the completeBulkLoad step is failing. Please refer to the mail below.
-- Forwarded message --
From: anil gupta
Date: Thu, May 25, 2017 at 4:38 PM
Subject: [IndexTool NOT working] mapreduce.LoadIncrementalHFiles: Split
mailing list if you are stuck.
> > >
> > > On Fri, May 12, 2017 at 7:59 AM, F. T. wrote:
> > >
> > > > Hi all,
> > > >
> > > > I'd like to use MOB in HBase to store PDF files. I'm using Hbase
> 1.2.3
> > > but
> > > > I get this error creating a table with a MOB column: NameError:
> > > > uninitialized constant IS_MOB.
> > > >
> > > > A lot of web sites (including Apache official web site) talk about
> the
> > > > patch 11339 or HBase 2.0.0, but, I don't find any explanation about
> the
> > > way
> > > > to install this patch and
> > > >
> > > > I can't find the 2.0.0 version anywhere. So I'm completely lost. Could
> > you
> > > > help me please ?
> > > >
> > > >
> > >
> >
>
--
Thanks & Regards,
Anil Gupta
conjunction with other programs running on that machine, this
>>> sometimes
>>> leads to an "overload" situation.
>>>
>>> Is there a way to keep thread pool usage limited - or in some closer
>>> relation with the actual concurrency required?
>>>
>>> Thanks,
>>>
>>> Henning
>>>
>>>
>>>
>>>
>
--
Thanks & Regards,
Anil Gupta
component as a prefix of your rowkey?
On Sun, Oct 23, 2016 at 7:01 PM, Manjeet Singh
wrote:
> Anil, it's written that it can hold the lock up to 60 seconds. In my case my job gets
> stuck, and many updates for the same rowkey cause bad health of HBase in CDH
> 5.8
>
> On 24 Oct 2016 06:26, "
t to update if I found xyz
> record and if I have a few ETL processes which are responsible for aggregating the
> data, which is very common. ... why does my HBase get stuck if I try to update the same
> rowkey... it means it holds the lock for a few seconds
>
> On 24 Oct 2016 00:46, "anil gupta
> >> > > > On Wed, Aug 17, 2016 at 9:54 AM, Manjeet Singh <
> >> > > manjeet.chand...@gmail.com
> >> > > > >
> >> > > > wrote:
> >> > > >
> >> > > > > Hi All
> >> > > > >
> >> > > > > Can anyone help me about how and in which version of Hbase
> support
> >> > > Rowkey
> >> > > > > lock ?
> >> > > > > I have seen article about rowkey lock but it was about .94
> >> version it
> >> > > > said
> >> > > > > that if row key not exist and any update request come and that
> >> rowkey
> >> > > not
> >> > > > > exist then in this case Hbase hold the lock for 60 sec.
> >> > > > >
> >> > > > > currently I am using Hbase 1.2.2 version
> >> > > > >
> >> > > > > Thanks
> >> > > > > Manjeet
> >> > > > >
> >> > > > >
> >> > > > >
> >> > > > > --
> >> > > > > luv all
> >> > > > >
> >> > > >
> >> > > >
> >> > > >
> >> > > > --
> >> > > > luv all
> >> > > >
> >> > >
> >> > >
> >> > > --
> >> > > -Dima
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> > luv all
> >> >
> >>
> >>
> >> --
> >> -Dima
> >>
> >
> >
> >
> > --
> > luv all
> >
>
>
>
> --
> luv all
>
--
Thanks & Regards,
Anil Gupta
Hi Frank,
I don't know your exact use case. But, I have successfully run copyTable
across *2 secure* clusters back in 2013-2014 on a CDH distro cluster.
Unfortunately, I don't remember the settings or command that we ran to do
that since it was at my previous job.
Thanks,
Anil Gupta
On Fri, Sep 9
"roll their own" based on an Apache
> release is in for some long nights.
>
> On Sunday, August 21, 2016, anil gupta wrote:
>
> > Hi Dima,
> >
> > I was under the impression that some CDH5.x GA release shipped MOB. Is that
> > wrong?
> >
> >
> >
> > > On Saturday, August 20, 2016, Ascot Moss wrote:
> > wrote:
> > >
> > > > Hi,
> > > >
> > > > I want to use MOB in Hbase 1.2.2, can anyone advise the step to
> > backport
> > > > MOB to HBase 1.2.2?
> > > >
> > > > Regards
> > > >
> > >
> > >
> > > --
> > > -Dima
> > >
> >
>
>
> --
> -Dima
>
--
Thanks & Regards,
Anil Gupta
top of
HBase. It is ANSI SQL compliant.
Currently Phoenix is officially supported by HDP and it is also present in
Cloudera Labs.
HTH,
Anil Gupta
On Fri, Jul 8, 2016 at 5:18 AM, Dima Spivak wrote:
> Hey Mahesha,
>
> It might be worthwhile to read through the architecture section o
to gather all columns for a given analysis type, etc. but it
> > would perhaps result in larger column names across billions of rows.
> >
> > e.g. *analysisfoo_4_column1*
> >
> > In practice, is this done and can it perform well? Or is it better to
> pick
> > a fixed width and use some number in its place, that's then translated
> via,
> > say, another table?
> >
> > e.g. *10_1000_10* (or something to that effect -- fixed width
> > numbers that are stand-in ids for potentially longer descriptions).
> >
> > Thanks,
> > - Ken
> >
>
--
Thanks & Regards,
Anil Gupta
Cool, Thanks. Let me send the talk proposal to higher management.
On Wed, Apr 27, 2016 at 8:16 AM, James Taylor
wrote:
> Yes, that sounds great - please let me know when I can add you to the
> agenda.
>
> James
>
> On Tuesday, April 26, 2016, Anil Gupta wrote:
>
>
Hi James,
I spoke to my manager and he is fine with the idea of giving the talk. Now, he
is gonna ask higher management for final approval. I am assuming there is still
a slot for my talk in the use case section. I should go ahead with my approval
process. Correct?
Thanks,
Anil Gupta
Sent from my
--
Thanks & Regards,
Anil Gupta
ully resolve this
> > issue.
> >
> >
> >
> > --
> > Talat UYARER
> > Websitesi: http://talat.uyarer.com
> > Twitter: http://twitter.com/talatuyarer
> > Linkedin: http://tr.linkedin.com/pub/talat-uyarer/10/142/304
> >
>
--
Thanks & Regards,
Anil Gupta
>
> http://mail-archives.apache.org/mod_mbox/phoenix-user/
>
> On Tue, Mar 8, 2016 at 4:54 PM, anil gupta wrote:
>
> > Hi,
> >
> > One of our ruby apps might be using this ruby gem(
> > https://rubygems.org/gems/ruby-phoenix/versions/0.0.8) to query
> Phoenix. I
>
Oh, my bad. I'm on the wrong mailing list. Didn't notice my mistake. Thanks for
the reminder, Stack.
On Tue, Mar 8, 2016 at 5:10 PM, Stack wrote:
> On Tue, Mar 8, 2016 at 4:57 PM, anil gupta wrote:
>
> > Yeah, i have looked at that. Non-commercial only provides very basic
> >
Phoenix out of the box.
On Sat, Mar 5, 2016 at 12:04 PM, Rohit Jain wrote:
> You probably already looked at dbVisualizer
>
> Rohit
>
> On Mar 5, 2016, at 1:25 PM, anil gupta wrote:
>
> Hi,
>
> I have been using SquirrelSql to query Phoenix. For oracle/sql server, i
>
Phoenix4.4
with a ruby gem of Phoenix4.2? If not, then what would we need to
do? (upgrade the ruby gem to Phoenix4.4?)
Here is the git: https://github.com/wxianfeng/ruby-phoenix
--
Thanks & Regards,
Anil Gupta
ssful. Has anyone been successful?
I would like to know what other database browser tools people are using to
connect.
--
Thanks & Regards,
Anil Gupta
PS: I would prefer to use Database browser tools to query a database that
itself has Apache License. :)
Also came across this: https://issues.apache.org/jira/browse/HBASE-6790
HBASE-6790 is also unresolved.
On Sun, Feb 28, 2016 at 10:26 PM, anil gupta wrote:
> Hi,
>
> A non java app would like to use AggregateImplementation(
> https://hbase.apache.org/devapidocs/org/apache/
, can you also
tell me how to make calls.
I came across this: https://issues.apache.org/jira/browse/HBASE-5600 . But,
it's unresolved.
--
Thanks & Regards,
Anil Gupta
If it's possible to make the timestamps a suffix of your rowkey (assuming the
rowkey is composite), then you would not run into read/write hotspots.
Have a look at the OpenTSDB data model, which scales really, really well.
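A rough sketch of that composite-key idea (names invented; in practice the
leading component should be fixed-width):

    import org.apache.hadoop.hbase.util.Bytes;

    public class TimeSuffixKey {
        // Entity id first (spreads load across regions), timestamp last
        // (keeps one entity's samples contiguous). Long.MAX_VALUE - ts
        // sorts newest-first, which makes "latest N" scans cheap.
        public static byte[] row(String entityId, long epochMillis) {
            return Bytes.add(Bytes.toBytes(entityId),
                             Bytes.toBytes(Long.MAX_VALUE - epochMillis));
        }
    }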
Sent from my iPhone
> On Feb 21, 2016, at 10:28 AM, Stephen Durfey wrote:
>
I don't think there are any atomic operations in HBase to support DDL across 2
tables.
But, maybe you can use HBase snapshots (sketched below):
1. Create an HBase snapshot.
2. Truncate the table.
3. Write data to the table.
4. Create a table from the snapshot taken in step #1 as table_old.
Now you have two tables. One with
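The steps above, sketched against the 1.x Admin API (snapshot and table names
invented; note this is not atomic, readers see the truncated table until the
reload finishes):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public class SnapshotSwap {
        public static void swap(Connection conn) throws Exception {
            TableName t = TableName.valueOf("t");
            try (Admin admin = conn.getAdmin()) {
                admin.snapshot("t_before_reload", t);        // step 1
                admin.disableTable(t);
                admin.truncateTable(t, true);                // step 2, keeps splits
                // step 3: reload the table here
                admin.cloneSnapshot("t_before_reload",       // step 4
                                    TableName.valueOf("t_old"));
            }
        }
    }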
, phoenix.upsert.batch.size is 1000. Hence, the commits were
failing with a huge batch size of 1000.
Thanks,
Anil Gupta
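If the job builds its own Configuration, the override is one line; the
property name is the one referenced in this thread, and the value is just an
example:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class UpsertBatch {
        public static Configuration jobConf() {
            Configuration conf = HBaseConfiguration.create();
            // Default was reported as 1000 here; tune well below that if
            // commits overwhelm the region servers.
            conf.set("phoenix.upsert.batch.size", "50");
            return conf;
        }
    }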
On Sun, Feb 14, 2016 at 8:03 PM, Heng Chen wrote:
> I am not sure whether "upsert batch size in phoenix" equals HBase Client
> batch puts size or not.
>
> But
My phoenix upsert batch size is 50. You mean to say that 50 is also a lot?
However, AsyncProcess is complaining about 2000 actions.
I tried with an upsert batch size of 5 also, but it didn't help.
On Sun, Feb 14, 2016 at 7:37 PM, anil gupta wrote:
> My phoenix upsert batch size is 50. You mean
ain]
> org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
> actions to finish
>
> It means you have too many writes; please decrease the batch size of your
> puts, and balance your requests across each RS.
>
> 2016-02-15 4:53 GMT+08:00 anil gupta :
>
> > After a while
would
not time out in 18ms
On Sun, Feb 14, 2016 at 12:44 PM, anil gupta wrote:
> Hi,
>
> We are using phoenix4.4, hbase 1.1(hdp2.3.4).
> I have a MR job that is using PhoenixOutputFormat. My job keeps on failing
> due to following error:
>
> 2016-02-14 12
2000
actions to finish
I have never seen anything like this. Can anyone give me pointers about
this problem?
--
Thanks & Regards,
Anil Gupta
hrift API you access HBase
>> RegionServer through the single gateway server,
>> when you use Java API - you access Region Server directly.
>> Java API is much more scalable.
>>
>> -Vlad
>>
>>> On Tue, Jan 12, 2016 at 7:36 AM, Anil Gupta wrote:
>>>
Hey Serega,
Have you tried using the Java API of HBase to create the table? IMO, invoking a
shell script from a Java program to create a table might not be the most
elegant way.
Have a look at
https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html
HTH,
Anil Gupta
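A minimal sketch of that API (1.x style; table and family names are
placeholders):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateTable {
        public static void main(String[] args) throws Exception {
            try (Connection conn =
                     ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("my_table"));
                desc.addFamily(new HColumnDescriptor("cf"));
                if (!admin.tableExists(desc.getTableName())) {
                    admin.createTable(desc);
                }
            }
        }
    }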
On Wed, Jan
The Java API should be the same or better in performance compared to the Thrift API.
With the Thrift API there is an extra hop, so most of the time the Java API will
perform better.
Sent from my iPhone
> On Jan 12, 2016, at 4:29 AM, Rajeshkumar J
> wrote:
>
> Hi,
>
> I am currently accessing r
>I have about 10 million rows, with each row having more than 10k
> columns. I need to query this table based on row key; which will be the
> apt query process for this?
>
> Thanks
>
> On Fri, Dec 18, 2015 at 5:43 PM, anil gupta wrote:
>
> > Hi RajeshKumar,
> &
ithin one or two seconds. Help me to choose
> which type of scan I have to use for this - range scan or rowfilter scan
>
> Thanks
>
--
Thanks & Regards,
Anil Gupta
Update: We tried and it worked.
On Thu, Oct 29, 2015 at 1:24 PM, anil gupta wrote:
> Hi Ted,
>
> So, as per the jira, the answer to my question is YES.
> We are running HDP2.3.0. That jira got fixed in 0.98.1. So, we should be
> fine.
>
> Thanks,
> Anil Gupta
>
> On
Hi Ted,
So, as per the jira, the answer to my question is YES.
We are running HDP2.3.0. That jira got fixed in 0.98.1. So, we should be
fine.
Thanks,
Anil Gupta
On Thu, Oct 29, 2015 at 12:27 PM, Ted Yu wrote:
> Please take a look at:
> https://issues.apache.org/jira/browse/HBASE-8751
>
test this hypothesis.
So, I would like to confirm this on the mailing list. Please let me know.
--
Thanks & Regards,
Anil Gupta
ot related to the Maven site
>>>> styling.
>>>>
>>>>> On Thu, Oct 29, 2015 at 4:13 AM, Stack wrote:
>>>>>
>>>>> It looks lovely on a nexus (smile).
>>>>>
>>>>> Site looks good to me
yjones.github.io/hbase/index.html. Note the Github ribbon
> >> and the Google site search. I'm curious to know what you think.
> >>
> >> I also put the 0.94 docs menu as a submenu of the Documentation menu, to
> >> see how it looked.
> >>
> >> Thanks,
> >> Misty
> >>
> >
> >
>
--
Thanks & Regards,
Anil Gupta
adable format.
>
> To check the peer-state value you can use zk_dump command in hbase shell or
> from web UI.
>
> Did you find any errors in the RS logs for replication ?
>
> Regards,
> Ashish Singhi
>
> On Wed, Oct 14, 2015 at 5:04 AM, anil gupta wrote:
>
> > I
Hi,
As far as I know, exporting a snapshot from 0.98 -> 1.0 should work.
Maybe you can verify this by creating a test table, putting a couple of rows
in it, exporting a snapshot of that table, and cloning the exported snapshot on
the remote cluster.
Thanks,
Anil Gupta
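The export itself can be driven from Java too; a sketch, with the snapshot
name and remote root dir as placeholders (equivalent to running the
ExportSnapshot tool from the command line):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.snapshot.ExportSnapshot;
    import org.apache.hadoop.util.ToolRunner;

    public class ExportSnapshotTest {
        public static void main(String[] args) throws Exception {
            int rc = ToolRunner.run(HBaseConfiguration.create(), new ExportSnapshot(),
                    new String[] { "-snapshot", "test_snap",
                                   "-copy-to", "hdfs://remote-nn:8020/hbase" });
            System.exit(rc);
        }
    }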
On Sat, Oct 17, 2015 at 12:30
Created this: https://issues.apache.org/jira/browse/HBASE-14612
On Wed, Oct 14, 2015 at 10:18 PM, anil gupta wrote:
> Hi Samir,
>
> You are right. But, the HBase documentation didn't mention the strict requirement
> of a correct hbase directory. So, I had to do a few more trials to come up
>
te
> snapshot directories on destination cluster and move data to correct
> locations.
>
> Regards
> Samir
>
> On Wed, Oct 14, 2015 at 9:10 PM, anil gupta wrote:
>
> > I am using 0.98. I used those doc instructions to export the snapshot.
> What
> > do you mean by
hots.
>
> Regards
> Samir
>
> On Wed, Oct 14, 2015 at 8:25 PM, anil gupta wrote:
>
> > I don't see the snapshot when I run "list_snapshot" on the destination
> > cluster (I checked that initially but forgot to mention it in my post).
> > Is it supposed to be l
you see snapshot on remote cluster? If you can see snapshot you can use
> clone snapshot command from hbase shell to create table.
> Regards
> Samir
> On Oct 14, 2015 6:38 PM, "anil gupta" wrote:
>
> > Hi,
> >
> > I exported snapshot of a table to remote
find steps to accomplish my task. Can anyone provide me the steps or point
me to documentation?
--
Thanks & Regards,
Anil Gupta
Can anyone tell me what is probably going on?
On Tue, Oct 13, 2015 at 3:56 PM, anil gupta wrote:
> Hi All,
>
> I am using HBase 0.98(HDP2.2).
> As per the documentation here:
>
> http://www.cloudera.com/content/cloudera/en/documentation/cdh4/v4-3-1/CDH4-Installation-Guide/cdh4i
> start_replication
NameError: undefined local variable or method `start_replication' for
#
Is start_replication not a valid command in HBase 0.98? If it's deprecated,
then what is the alternate command?
--
Thanks & Regards,
Anil Gupta
Hi Liren,
In short, adding new columns will *not* trigger compaction.
Thanks,
Anil Gupta
On Sat, Oct 10, 2015 at 9:20 PM, Liren Ding
wrote:
> Thanks Ted. So far I don't see direct answer yet in any hbase books or
> articles. all resources say that values are ordered by rowke
Hi Nicolas,
For a table with 5k regions, it should not take more than 10 min for alter
table operations.
Also, in HBase 1.0+, alter table operations do not require disabling the
table. So, you are encouraged to upgrade.
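For illustration, an online alter with the 1.x Admin API (names invented; no
disableTable() call is needed on 1.0+):

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public class OnlineAlter {
        public static void addFamily(Connection conn, String table, String family)
                throws Exception {
            try (Admin admin = conn.getAdmin()) {
                TableName tn = TableName.valueOf(table);
                HTableDescriptor desc = admin.getTableDescriptor(tn);
                desc.addFamily(new HColumnDescriptor(family));
                admin.modifyTable(tn, desc); // rolled out while table stays enabled
            }
        }
    }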
Sent from my iPhone
> On Oct 9, 2015, at 1:15 AM, Nicolae Marasoiu
> w
Hi Akmal,
It will be better if you use the nameservice value. Then you will not need to worry
about which NN is active. I believe you can find that property in Hadoop's
core-site.xml file.
Sent from my iPhone
On Sep 24, 2015, at 7:23 AM, Akmal Abbasov wrote:
>> My suggestion is different. You shoul
How many rows are expected?
Can you do a sanity check on your data to make sure there are no duplicate
rowkeys?
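If the two counts disagree, a third, client-side scan count is an easy cross
check; a sketch against the 1.x API (each row is counted once no matter how
many columns it has):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.*;
    import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;

    public class RowCount {
        public static long count(Connection conn, String tableName) throws Exception {
            Scan scan = new Scan();
            scan.setFilter(new FirstKeyOnlyFilter()); // ship one KV per row
            scan.setCaching(1000);
            long rows = 0;
            try (Table table = conn.getTable(TableName.valueOf(tableName));
                 ResultScanner rs = table.getScanner(scan)) {
                for (Result ignored : rs) rows++;
            }
            return rows;
        }
    }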
Sent from my iPhone
> On Sep 22, 2015, at 8:35 AM, OM PARKASH Nain
> wrote:
>
> I am using two methods for row count:
>
> hbase shell:
>
> count "Table1"
>
> another is:
>
> hbase
urce)
>at sun.security.jgss.GSSContextImpl.initSecContext(Unknown Source)
>at sun.security.jgss.GSSContextImpl.initSecContext(Unknown Source)
>... 19 more
> 2015-08-31 10:15:27,911 WARN [regionserver60020]
> regionserver.HRegionServer: reportForDuty failed; sleepin
ault principal: testuser@REALM
> > >
> > > Valid starting ExpiresService principal
> > > 08/21/15 09:39:33 08/22/15 09:39:33 krbtgt/REALM@REALM
> > > renew until 08/21/15 09:39:33
> > >
> > >
> > > Loïc CHANEL
>
rootdir
> > > is
> > > > a shared file system, there shouldn't be any data movement with a
> > region
> > > > reassignment, correct? I'm running into performance issues where
> region
> > > > assignment takes a very long time and I'm trying to figure out why.
> > > >
> > > > Thanks!
> > > >
> > >
> >
>
--
Thanks & Regards,
Anil Gupta
ipc.RpcServer: Have read input token of size 0 for processing by
> >>> > saslServer.evaluateResponse()
> >>> > 2015-08-20 13:50:12,707 DEBUG [RpcServer.reader=2,port=6]
> >>> > ipc.RpcServer: Will send token of size 32 from saslServer.
> >>> > 2015-08-20 13:50:12,708 DEBUG [RpcServer.reader=2,port=6]
> >>> > ipc.RpcServer: RpcServer.listener,port=6: DISCONNECTING client
> >>> > 192.168.6.148:43014 because read count=-1. Number of active
> >>> connections: 3
> >>> >
> >>> > Do anyone has an idea about where this might come from, or how to
> >>> solve it
> >>> > ? Because I couldn't find much documentation about this.
> >>> > Thanks in advance for your help !
> >>> >
> >>> >
> >>> > Loïc
> >>> >
> >>> > Loïc CHANEL
> >>> > Engineering student at TELECOM Nancy
> >>> > Trainee at Worldline - Villeurbanne
> >>> >
> >>>
> >>
> >>
> >
>
--
Thanks & Regards,
Anil Gupta
?
> > > > > > > Is
> > > > > > > > > > there any limit on key size only ?
> > > > > > > > > > 2.Access pattern is mostly on key based only- Is
> memstores
> > > and
> > > > > > > regions
> > > > > > > > > on a
> > > > > > > > > > regionserver are per table basis? Is it if I have
> multiple
> > > > tables
> > > > > > it
> > > > > > > > will
> > > > > > > > > > have multiple memstores instead of few if it would have
> > been
> > > > one
> > > > > > > large
> > > > > > > > > > table ?
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > On Mon, Aug 17, 2015 at 7:29 PM, Ted Yu <
> > yuzhih...@gmail.com
> > > >
> > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > > For #1, take a look at the following in
> > hbase-default.xml :
> > > > > > > > > > >
> > > > > > > > > > > <property>
> > > > > > > > > > >   <name>hbase.client.keyvalue.maxsize</name>
> > > > > > > > > > >   <value>10485760</value>
> > > > > > > > > > > </property>
> > > > > > > > > > >
> > > > > > > > > > > For #2, it would be easier to answer if you can outline
> > > > access
> > > > > > > > patterns
> > > > > > > > > > in
> > > > > > > > > > > your app.
> > > > > > > > > > >
> > > > > > > > > > > For #3, adjustment according to current region
> boundaries
> > > is
> > > > > done
> > > > > > > > > client
> > > > > > > > > > > side. Take a look at the javadoc for LoadQueueItem
> > > > > > > > > > > in LoadIncrementalHFiles.java
> > > > > > > > > > >
> > > > > > > > > > > Cheers
> > > > > > > > > > >
> > > > > > > > > > > On Mon, Aug 17, 2015 at 6:45 AM, Shushant Arora <
> > > > > > > > > > shushantaror...@gmail.com
> > > > > > > > > > > >
> > > > > > > > > > > wrote:
> > > > > > > > > > >
> > > > > > > > > > > > 1.Is there any max limit on key size of hbase table.
> > > > > > > > > > > > 2.Is multiple small tables vs one large table which
> one
> > > is
> > > > > > > > preferred.
> > > > > > > > > > > > 3.for bulk load -when LoadIncremantalHfile is run it
> > > again
> > > > > > > > > > recalculates
> > > > > > > > > > > > the region splits based on region boundary - is this
> > > > division
> > > > > > > > happens
> > > > > > > > > > on
> > > > > > > > > > > > client side or server side again at region server or
> > > hbase
> > > > > > master
> > > > > > > > and
> > > > > > > > > > > then
> > > > > > > > > > > > it assigns the splits which cross target region
> > boundary
> > > to
> > > > > > > desired
> > > > > > > > > > > > regionserver.
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
--
Thanks & Regards,
Anil Gupta
uding some custom ones like statistical unique
> > counts.
> >
> > I noticed that available tooling with coprocessors, like
> > ColumnAggregationProtocol, involve just one metric e.g. one sum(column).
> We
> > collect many, and of course it is more efficient to scan the data once.
> >
> > Please advise,
> > Nicu
> >
>
--
Thanks & Regards,
Anil Gupta
ons (selectable by way of the Accept
> > header, e.g. Accept: text/plain). If you'd like to propose a patch we'd
> > certainly look at it.
> >
> > Thanks.
> >
> >
> > On Wed, Aug 5, 2015 at 12:51 AM, anil gupta
> wrote:
> >
> >>
willing to work on this.
Would HBase accept a patch?
Thanks,
Anil Gupta
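In the meantime, the Accept-header route mentioned below works without any
patch; a sketch (host, port, table, row and column are placeholders):

    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RestRawValue {
        // With this Accept header the REST gateway streams the raw cell bytes
        // instead of an XML/JSON envelope whose values are Base64-encoded.
        public static byte[] fetch() throws Exception {
            URL url = new URL("http://resthost:8080/my_table/row1/cf:col");
            HttpURLConnection c = (HttpURLConnection) url.openConnection();
            c.setRequestProperty("Accept", "application/octet-stream");
            try (InputStream in = c.getInputStream()) {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                byte[] chunk = new byte[4096];
                for (int n; (n = in.read(chunk)) > 0; ) buf.write(chunk, 0, n);
                return buf.toByteArray();
            }
        }
    }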
On Fri, Jul 17, 2015 at 4:57 PM, Andrew Purtell wrote:
>
>
> The closest you can get to just a string is have your client use an accept
> header of "Accept: application/octet-stream" with making
/HBASE-13140
>
> (there's also a workaround provided)
>
> On Thu, Jul 30, 2015 at 12:23 PM, anil gupta
> wrote:
>
> > http://hbase.apache.org/apidocs/index.html
> >
> > Above link refers to HBase2.0 docs and another link on our website refers
> >
http://hbase.apache.org/apidocs/index.html
Above link refers to HBase2.0 docs and another link on our website refers
to 0.94. So, there is no way to reach the 0.98, 1.0 or 1.1 docs.
On Thu, Jul 30, 2015 at 10:18 AM, anil gupta wrote:
> Hi All,
>
> Since we are talking about HBase documentati
> wrote:
> > >
> > > > While I like the new and better layout of the book it is painful to
> > use -
> > > > at least for me - because of its size.
> > > >
> > > >
> > > I've started to notice this too. It'd be sweet if it loaded more
> > promptly.
> > > Thanks for starting the discussion.
> > > St.Ack
> > >
> >
>
--
Thanks & Regards,
Anil Gupta
Hi All,
We have a String rowkey and String values in cells.
Still, Stargate returns the data Base64-encoded, due to which a user
can't read it. Is there a way to disable Base64 encoding so that the REST
request just returns strings?
--
Thanks & Regards,
Anil Gupta
- +91 8600011455
>
> On Wed, Jul 15, 2015 at 12:40 PM, anil gupta
> wrote:
>
> > Using a coprocessor to make calls to other tables or remote regions is an
> > ANTI-PATTERN. It will create a cyclic dependency between RegionServers in your
> cluster.
> > Coprocessors should be st
I think this is a duplicate post. Please avoid posting the same question twice. Please
use the previous thread where I replied.
Sent from my iPhone
> On Jul 14, 2015, at 11:17 PM, Chandrashekhar Kotekar
> wrote:
>
> Hi,
>
> REST APIs of my project make 2-3 calls to different tables in HBase. These
> cal
denormalizing the data and then just doing ONE call? Now, this
becomes more of a data modeling question.
Thanks,
Anil Gupta
On Tue, Jul 14, 2015 at 11:39 PM, Chandrashekhar Kotekar <
shekhar.kote...@gmail.com> wrote:
> Hi,
>
> REST APIs of my project make 2-3 calls to different
> >>>>>>>
> > > > > > > > >>>>>>> Sent from phone. Excuse typos.
> > > > > > > > >>>>>>> On Jun 5, 2015 6:00 PM, "mukund murrali" <
> > > > > > > > >> mukundmurra..
>> > >>> found No tasks currently running on this node. But we configured
> >> > >>>
> >> > >>>
> >> > >>>
> >> > >>> <property>
> >> > >>>   <name>hbase.regionserver.handler.count</name>
> >> > >>>   <value>150</value>
> >> > >>> </property>
> >> > >>> in hbase-site.xml
> >> > >>>
> >> > >>>
> >> > >>>
> >> > >>>
> >> > >>>
> >> > >>> And on another cluster using 0.96.0-hadoop2, we can see following
> >> tasks
> >> > >>> under Show All RPC Handler Tasks:
> >> > >>>
> >> > >>>
> >> > >>> Tue Jun 09 17:32:14 CST 2015
> >> > >>>
> >> > >>> RpcServer.handler=9,port=60020
> >> > >>>
> >> > >>> WAITING (since 0sec ago)
> >> > >>>
> >> > >>> Waiting for a call (since 0sec ago)
> >> > >>>
> >> > >>>
> >> > >>> So i want to know if it is a bug? or something i misunderstand?
> >> > >>>
> >> > >>>
> >> > >>> Any idea will be appreciated!
> >> > >>>
> >> > >>
> >> > >>
> >> > >
> >> >
> >>
> >
> >
>
--
Thanks & Regards,
Anil Gupta
would be best to avoid having differences in hardware
across cluster machines.
Thanks,
Anil Gupta
On Wed, Jun 17, 2015 at 5:12 PM, rahul malviya
wrote:
> Hi,
>
> Is it possible to configure HBase to have only fix number of regions per
> node per table in hbase. For example node1 serves 2 regions,
Thanks Stack.
On Wed, Jun 10, 2015 at 8:06 AM, Stack wrote:
> On Mon, Jun 8, 2015 at 10:27 PM, anil gupta wrote:
>
> > So, if we have to match against non-string data in the hbase shell, should we
> > always use double quotes?
>
>
> Double-quotes means the shell (ruby)
; > Hi,
> >
> >
> > I can only find documentations for 0.94 version of Hbase at
> > http://hbase.apache.org/0.94/apidocs/index.html,
> > but where can I find the URL for newer version?
> >
> >
> > Thanks
>
>
>
>
> --
> Sean
>
--
Thanks & Regards,
Anil Gupta
ase shell. We
> should always
> use double quotes?
>
> I think so.
>
> bq. Even for matching values of cells?
>
> Did you mean through use of some Filter ?
>
> Cheers
>
> On Mon, Jun 8, 2015 at 10:27 PM, anil gupta wrote:
>
> > So, if we have to match a
uestion) is that 'escape
> sequence' does not work using single quotes.
>
> Cheers
>
> On Mon, Jun 8, 2015 at 9:11 PM, anil gupta wrote:
>
> > Hi Jean,
> >
> > My bad. I gave a wrong illustration. This is the query I was trying on
> my
> > compo
", etc. and the first row returned starts with a "4"
> which is correct given the startrow you have specified.
>
> You seem to have a composite key. And you seem to scan without building
> the composite key. How have you created your table and what is your key
> design?
>
n": false,
"isEasyCareQualified": true}
I specified startRow='33078'. Then how come this result shows up? What's
going on here?
--
Thanks & Regards,
Anil Gupta
h hadoop ecosystem? etc.
Please explain your use case and share your thoughts after doing some
preliminary reading.
Thanks,
Anil Gupta
On Fri, May 29, 2015 at 12:20 PM, Lukáš Vlček wrote:
> As for the #4 you might be interested in reading
> https://aphyr.com/posts/294-call-me-maybe-cassandra
-a2ff42c8632f
> >> > 2015-05-17 20:39:49,208 ERROR [main] master.HMasterCommandLine: Master
> >> exiting
> >> > java.lang.RuntimeException: Master not active after 30 seconds
> >> >at
> >>
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:194)
> >> >at
> >>
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:445)
> >> >at
> >>
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:197)
> >> >at
> >>
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
> >> >at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> >> >at
> >>
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
> >> >at
> org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2002)
> >> >
> >> >
> >> > I noticed that this has something to do with the ZooKeeper data. If I
> >> > rm -rf $TMPDIR/hbase-tsuna/zookeeper then I can start HBase again.
> >> > But of course HBase won’t work properly because while some tables
> >> > exist on the filesystem, they no longer exist in ZK, etc.
> >> >
> >> > Does anybody know what could be left behind in ZK that could make it
> >> > hang during startup? I looked at a jstack output while it was paused
> >> > during 30s and didn’t find anything noteworthy.
> >> >
> >> > --
> >> > Benoit "tsuna" Sigoure
> >>
>
>
>
> --
> Benoit "tsuna" Sigoure
>
--
Thanks & Regards,
Anil Gupta
> > difference between the last table copy and data that has come in
> since?
> >> >
> >> >
> >> >
> >> > > This provides a great rollback strategy, and with our existing
> >> in-house
> >> > > cluster cloning tools we can minimize the read-only window to a few
> >> > minutes
> >> > > if all goes well.
> >> > >
> >> > > There are a couple gotchas I can think of with the shim, which I'm
> >> hoping
> >> > > some of you might have ideas/opinions on:
> >> > >
> >> > > 1) Since protobufs are used for communication, we will have to avoid
> >> > > shading those particular classes as they need to match the
> >> > > package/classnames on the server side. I think this should be fine,
> >> as
> >> > > these are net-new, not conflicting with CDH4 artifacts. Any
> >> > > additions/concerns here?
> >> > >
> >> > >
> >> > CDH4 has pb2.4.1 in it as opposed to pb2.5.0 in cdh5?
> >> >
> >>
> >> If your clients are interacting with HDFS then you need to go the route
> of
> >> shading around PB and its hard, but HBase-wise only HBase 0.98 and 1.0
> use
> >> PBs in the RPC protocol and it shouldn't be any problem as long as you
> >> don't need security (this is mostly because the client does a UGI in the
> >> client and its easy to patch on both 0.94 and 1.0 to avoid to call UGI).
> >> Another option is to move your application to asynchbase and it should
> be
> >> clever enough to handle both HBase versions.
> >>
> >>
> >>
> >> > I myself have little experience going a shading route so have little
> to
> >> > contribute. Can you 'talk out loud' as you try stuff Bryan and if we
> >> can't
> >> > help highlevel, perhaps we can help on specifics.
> >> >
> >> > St.Ack
> >> >
> >>
> >> cheers,
> >> esteban.
> >>
> >
> >
>
--
Thanks & Regards,
Anil Gupta
distributed.
>
> Thanks,
> Rahul
>
>
> On Wed, May 13, 2015 at 10:25 AM, Anil Gupta
> wrote:
>
> > How many mapper/reducers are running per node for this job?
> > Also how many mappers are running as data local mappers?
> > You load/data equally distribu
How many mappers/reducers are running per node for this job?
Also, how many mappers are running as data-local mappers?
Is your load/data equally distributed?
Your disk-to-CPU ratio looks OK.
Sent from my iPhone
> On May 13, 2015, at 10:12 AM, rahul malviya
> wrote:
>
> *The High CPU may be WAIT IOs,