Congratulations!!
2016-11-30 8:32 GMT+08:00 Stephen Jiang :
> Congratulations, Phil!
>
> On Tue, Nov 29, 2016 at 2:42 PM, Andrew Purtell wrote:
>
>> Congratulations and welcome, Phil!
>>
>>
>> On Tue, Nov 29, 2016 at 1:49 AM, Duo Zhang wrote:
>>
>> > On behalf of the Apache HBase PMC, I am pleas
The performance looks great!
2016-11-19 18:03 GMT+08:00 Ted Yu :
> Opening a JIRA would be fine.
> This makes it easier for people to obtain the patch(es).
>
> Cheers
>
>> On Nov 18, 2016, at 11:35 PM, Anoop John wrote:
>>
>> Because of some compatibility issues, we decided that this will be done
Congrats! :)
2016-10-16 8:19 GMT+08:00 Jerry He :
> Congratulations, Stephen.
>
> Jerry
>
> On Fri, Oct 14, 2016 at 12:56 PM, Dima Spivak wrote:
>
>> Congrats, Stephen!
>>
>> -Dima
>>
>> On Fri, Oct 14, 2016 at 11:27 AM, Enis Söztutar wrote:
>>
>> > On behalf of the Apache HBase PMC, I am happy
Not sure which version of HBase in the community "hbase m7" corresponds to.
Does your batch load job use some kind of bulk load, or does it just call the
HTable API to write data to HBase?
2016-09-22 14:30 GMT+08:00 Dima Spivak :
> Hey Deepak,
>
> Assuming I understand your question, I think you'd be better served
> reaching out to
OH! congrats, DUO!
2016-09-07 12:26 GMT+08:00 Stack :
> On behalf of the Apache HBase PMC I am pleased to announce that 张铎
> has accepted our invitation to become a PMC member on the Apache
> HBase project. Duo has healthy notions on where the project should be
> headed and over the last year a
-Xms32G, -Xmn4G
>
> Thanks,
> lujinhong
>
> > On 2016-06-22, at 15:53, Heng Chen wrote:
> >
> > How many regions do you have for the table? 8000 qps for one RS or for
> the
> > whole table? What's your java heap size now? and what's your hbase
op.hbase.ipc.CallRunner.run(CallRunner.java:101)
>
> at
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>
> at java.lang.Thread.run(Thread.java:745)
>
> On Wed, Jun 22, 2016 at 3:50 PM, He
How many regions do you have for the table? Is the 8000 qps for one RS or for
the whole table? What's your Java heap size now? And what's your HBase
version?
2016-06-22 12:39 GMT+08:00 jinhong lu :
> I got a cluster of 200 regionserver, and one of the tables is about 3T and
> 5 billion lines. Is it
Could you paste the whole jstack and the related RS log? It seems the row
write lock was held by some thread. We need more information to find it.
2016-06-22 13:48 GMT+08:00 vishnu rao :
> need some help. this has happened for 2 of my servers
> -
>
> *[B.defaultRpcServer.handler=2,queue=2
>
>
>
>
> At 2016-06-16 18:18:44, "Heng Chen" wrote:
> >bq. if we do not set any user tables IN_MEMORY to true, then the whole
> >hbase just need to cache hbase:meta data to in_memory LruBlockCache.
> >
> >You set blockcache to be false for o
bq. if we do not set any user tables IN_MEMORY to true, then the whole
hbase just need to cache hbase:meta data to in_memory LruBlockCache.
Did you set blockcache to false for the other tables?
2016-06-16 16:21 GMT+08:00 WangYQ :
> in hbase 0.98.10, if we use LruBlockCache, and set regionServer's max
us a little more detail? Like which version of HBase are you
> using and a stack dump from the RS?
>
> cheers,
> esteban.
>
>
> --
> Cloudera, Inc.
>
>
> On Mon, Jun 13, 2016 at 8:35 PM, Heng Chen
> wrote:
>
> > Currently, we found sometimes our RS ha
Currently, we have found that sometimes our RS handlers are occupied by a
single big request. For example, when handlers read the same big block from
HDFS simultaneously, all handlers wait except one: that one handler reads the
block from HDFS and puts it in the cache, and the other handlers then read
the block from the cache.
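The pattern just described (one handler fetches the block while the others wait, then everyone reads it from cache) can be sketched outside HBase. The following is an illustrative Python sketch, not HBase's actual HFile/BlockCache code; the class and method names are made up:

```python
import threading
from concurrent.futures import Future

class DedupBlockLoader:
    """One caller performs the expensive fetch; concurrent callers wait for it."""

    def __init__(self, fetch_fn):
        self._fetch_fn = fetch_fn   # stands in for the expensive HDFS block read
        self._lock = threading.Lock()
        self._inflight = {}         # block key -> Future being fetched
        self.cache = {}             # block key -> block bytes

    def get_block(self, key):
        with self._lock:
            if key in self.cache:               # fast path: already cached
                return self.cache[key]
            fut = self._inflight.get(key)
            if fut is None:                     # first caller becomes the leader
                fut = Future()
                self._inflight[key] = fut
                leader = True
            else:
                leader = False
        if leader:
            data = self._fetch_fn(key)          # only one caller hits "HDFS"
            with self._lock:
                self.cache[key] = data
                del self._inflight[key]
            fut.set_result(data)
            return data
        return fut.result()                     # other callers block here
```

However many handlers call `get_block` for the same key concurrently, the underlying fetch runs once; the rest are served from the future or the cache.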
They have different default values, and according to the contract of
HSTORE_OPEN_AND_CLOSE_THREADS_MAX, it should be OK. It just represents the
max number of threads in the pool.
/**
* The default number for the max number of threads used for opening and
* closing stores or store files in parallel
*/
publ
Is something wrong in the Snappy library?
Have you tried not using compression?
2016-06-03 11:13 GMT+08:00 吴国泉wgq :
> HI STACK:
>
>    1. The log is very large, so I picked some of it, but it doesn't seem to
> provide valuable info. Here is the region named
> qtrace,,1458012479440.dd8f92e3c161a8534b30ab17c28ae8
sh with seqId:" +
> flush.getFlushSequenceNumber());
>
> I searched for them in the log you attached to HBASE-15900 but didn't find
> any occurrence.
>
> FYI
>
> On Mon, May 30, 2016 at 2:59 AM, Heng Chen
> wrote:
>
> > I find something useful.
>
d before set writestate.flushing to be true.
So if region.close wakes up in writestate.wait but the lock has been acquired
by HRegion.replayWALFlushStartMarker, flushing will be set to true again, and
region.close will be stuck in writestate.wait forever.
Can this happen in practice?
2016-05-27 10:
And there is another question about the failed close state: does it mean a
region in this state can be read and written normally?
2016-05-26 12:48 GMT+08:00 Heng Chen :
>
> On the master web UI, I could see region (c371fb20c372b8edbf54735409ab5c4a)
> always in the failed close state, so the balancer could not run.
On the master web UI, I could see region (c371fb20c372b8edbf54735409ab5c4a)
always in the failed close state, so the balancer could not run.
I checked the region on the RS and found logs about this region:
2016-05-26 12:42:10,490 INFO [MemStoreFlusher.1]
regionserver.MemStoreFlusher: Waited 90447ms on a compac
In my company, we calculate UV/PV offline in batch and update every day.
If you do it online, url + timestamp could be the rowkey.
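As a hedged illustration of that rowkey idea (not the thread's actual schema), a common trick is to append a reversed timestamp so the newest events for each url sort first. A Python sketch with made-up names:

```python
# Illustrative sketch: url + reversed-timestamp rowkey for online PV events.
# Reversing the timestamp (LONG_MAX - ts) makes newer events sort before
# older ones within the same url prefix; zero-padding keeps byte ordering
# consistent with numeric ordering.
LONG_MAX = 2 ** 63 - 1

def pv_rowkey(url: str, ts_seconds: int) -> bytes:
    reversed_ts = LONG_MAX - ts_seconds
    return f"{url}|{reversed_ts:019d}".encode("utf-8")
```

With this layout, a prefix scan on the url returns its events newest-first.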
2016-05-16 18:13 GMT+08:00 齐忠 :
> Yes, like google analytics.
>
> 2016-05-16 17:48 GMT+08:00 Heng Chen :
> > You want to calculate UV/PV online?
>
You want to calculate UV/PV online?
2016-05-16 16:46 GMT+08:00 齐忠 :
> I have very large log(50T per day),
>
> My log event as follows
>
> url,visitid,requesttime
>
> http://www.aaa.com?a=b&c=d&e=f, 1, 1463387380
> http://www.aaa.com?a=b&c=d&e=fa, 1, 1463387280
> http://www.aaa.com?a=b&c=d&e=fa, 2
That's great! We are ready to use SSD to improve read performance now.
2016-04-23 8:25 GMT+08:00 Stack :
> It is well worth the read. It goes deep so is a bit long and I had to cut
> it up to do Apache Blog sized bits. Start reading here:
> https://blogs.apache.org/hbase/entry/hdfs_hsm_and_hbase
evel.
>
> Was 9151f75eaa7d00a81e5001f4744b8b6a among the regions which didn't finish
> split ?
>
> Can you pastebin more of the master log during this period ?
>
> Any other symptom that you observed ?
>
> Cheers
>
> On Thu, Apr 7, 2016 at 12:59 AM, Heng Chen
> wrote:
>
> > T
8eb71ee6477c2ebe
2016-04-07 12:23:54,386 DEBUG [region-location-4]
regionserver.HRegionFileSystem: No StoreFiles for:
hdfs://hdfs-master:8020/hbase/data/default/apolo_pdf/9151f75eaa7d00a81e5001f4744b8b6a/m
2016-04-07 12:25:54,033 DEBUG [region-location-4]
regionserver.HRegionFileSystem: No StoreFile
Hi guys,
I upgraded our cluster recently. After the upgrade, I found some weird
problems:
in the master log, there are a lot of lines like the below:
2016-04-07 11:57:00,597 DEBUG [region-location-0]
regionserver.HRegionFileSystem: No StoreFiles for:
hdfs://common-cluster:8020/hbase/data/default/
Was your DN responding slowly at that time?
2016-03-23 15:50 GMT+08:00 Anoop John :
> At the same time, did any explicit close op happen on the WAL file? Any
> log rolling? Can you check the logs to know this? Maybe check the HDFS
> logs to learn about the close calls to the WAL file?
>
> -Anoop-
>
> On Wed
OpenTSDB +1 :)
2016-03-23 11:49 GMT+08:00 Wojciech Indyk :
> Hi Prem!
> Look at OpenTSDB http://opentsdb.net/
> --
> Kind regards/ Pozdrawiam,
> Wojciech Indyk
> http://datacentric.pl
>
>
> 2016-03-07 11:26 GMT+01:00 Prem Yadav :
> > Hi,
> > we have a use case where we need to get the data fo
bq. the table I created by default having only one region
Why not pre-split the table into more regions when creating it?
2016-03-16 11:38 GMT+08:00 Ted Yu :
> When one region is split into two, both daughter regions are opened on the
> same server where parent region was opened.
>
> Can you provide a
What is your HLog file count during the test? Is it always at the max number
(IIRC, the default is 34?)
How many DNs are in your HDFS?
2016-03-09 1:31 GMT+08:00 Frank Luo :
> 0.98
>
> "Light" means not enough to trigger compactions during active writes.
>
> -Original Message-
> From: saint@gmail.co
:41 GMT+08:00 Stack :
> On Wed, Feb 24, 2016 at 3:31 PM, Heng Chen
> wrote:
>
> > The story is I run one MR job on my production cluster (0.98.6), it
> needs
> > to scan one table during map procedure.
> >
> > Because of the heavy load from the job, all my RS
Thanks @ted, your suggestions about #2 and #3 are what I need!
2016-02-25 10:39 GMT+08:00 Heng Chen :
> I pick up some logs in master.log about one region
> "ad283942aff2bba6c0b94ff98a904d1a"
>
>
> 2016-02-24 16:24:35,610 INFO [AM.ZK.Worker-pool2-t3491]
> master.R
gs w.r.t. these two regions so that we
> can have more clue ?
>
> For #2, please see http://hbase.apache.org/book.html#big.cluster.config
>
> For #3, please see
>
> http://hbase.apache.org/book.html#_running_multiple_workloads_on_a_single_cluster
>
> On Wed, Feb 24, 2016 a
The story is that I ran one MR job on my production cluster (0.98.6); it
needed to scan one table during the map procedure.
Because of the heavy load from the job, all my RS crashed due to OOM.
After I restarted all the RS, I found one problem.
All regions were reopened on one RS, and the balancer could not run.
> lot?
> >
> > However, AsyncProcess is complaining about 2000 actions.
> >
> > I tried with upsert batch size of 5 also. But it didnt help.
> >
> >
> > On Sun, Feb 14, 2016 at 6:43 PM, Heng Chen
> > wrote:
> >
> >> 2016-02-14 12:34:23,593
2016-02-14 12:34:23,593 INFO [main]
org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
actions to finish
It means you have too many writes in flight. Please decrease the batch size
of your puts, and balance your requests across the RSs.
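A minimal sketch of the batching side of that advice: cut a large list of puts into smaller batches before flushing each one. The helper is plain Python; the flush call it feeds would be whatever your client API provides (e.g. a table batch write), which is left out here:

```python
# Sketch: split a big list of pending puts into bounded batches so no single
# flush sends thousands of actions at once.
def batched(puts, batch_size):
    for i in range(0, len(puts), batch_size):
        yield puts[i:i + batch_size]
```

Usage would be `for batch in batched(all_puts, 100): table.put(batch)` with whatever put/flush call your client exposes.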
2016-02-15 4:53 GMT+08:00 anil gupta :
> After a while w
I changed the Phoenix lib from 4.6.0 to 4.5.1, and the logs came back...
2016-02-14 15:27 GMT+08:00 Heng Chen :
> I found some hints; the logs seem to disappear after I installed
> Phoenix. Some suspicious logs I found below:
>
> SLF4J: Class path contains multiple SLF4J bindings.
/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
2016-02-14 15:17 GMT+08:00 Heng Chen :
> This happened after I upgraded my cluster from 0.98 to 1.1
>
>
>
> 2016-02-14 12:47 GMT+08:00 Heng Chen :
>
>> I am n
This happened after I upgraded my cluster from 0.98 to 1.1
2016-02-14 12:47 GMT+08:00 Heng Chen :
> I am not sure why this happens, this is my command
>
> maintain 11444 66.9 1.1 10386988 1485888 pts/0 Sl 12:33 6:30
> /usr/java/jdk/bin/java -Dproc_regionserver -XX:OnOutOfMemo
I am not sure why this happens; this is my command:
maintain 11444 66.9 1.1 10386988 1485888 pts/0 Sl 12:33 6:30
/usr/java/jdk/bin/java -Dproc_regionserver -XX:OnOutOfMemoryError=kill -9
%p -Xmx8000m -XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails
-XX:+PrintGCDateStamps -XX:+UseGCLogF
Jan 11, 2016 at 6:52 PM, Heng Chen
> wrote:
>
> > Some relates region log on RS
> >
> >
> > 2016-01-12 10:45:01,570 INFO
> > [PriorityRpcServer.handler=14,queue=0,port=16020]
> > regionserver.RSRpcServices: Open
> >
> >
> PIPE.TABLE_CONFIG,\x0
org.apache.hadoop.hbase.regionserver.HRegion.&lt;init&gt;(HRegion.java:620)
... 15 more
2016-01-12 10:42 GMT+08:00 Heng Chen :
> Information from Web UI
>
> regionstate
>RIT
> 4a5c3511dc0b880d063e56042a7da547
> PIPE.TABLE_CONFIG,\x
> Consider using third party image hosting site.
>
> Pastebinning server log would help.
>
> Cheers
>
> On Mon, Jan 11, 2016 at 6:28 PM, Heng Chen
> wrote:
>
> > [image: inline image 1]
> >
> >
> > HBASE-1.1.1 hadoop-2.5.0
> >
> >
> > I want to recover these regions. How? Asking for help.
> >
>
[image: inline image 1]
HBASE-1.1.1 hadoop-2.5.0
I want to recover these regions. How? Asking for help.
@tedyu, should we add something like 'list server table' to list all
regions of one table on a given RS?
I have found in my practice that it is often needed.
2015-12-04 4:48 GMT+08:00 Ted Yu :
> There is get_splits command but it only shows the splits.
>
> status 'detailed' would show you enough informati
e than 2 minutes to return the results.
>
> Suggest me a way so that I have to bring down this process to 1 to 2
> seconds since I am using it in real-time analytics
>
> Thanks
>
> On Tue, Dec 1, 2015 at 3:40 PM, Heng Chen
> wrote:
>
> > So, maybe we can use 1212 +
So maybe we can use 1212 + customerId as the rowkey.
BTW, what is 1212 used for?
2015-12-01 17:49 GMT+08:00 Rajeshkumar J :
> Hi chen,
>
> yes I have customerid column to represent each customers
>
>
>
> On Tue, Dec 1, 2015 at 3:11 PM, Heng Chen
> wrote:
>
> > Hm
>
> On Tue, Dec 1, 2015 at 1:59 PM, Heng Chen
> wrote:
>
> > Yeah, if you want to get all records about 1212, just scan rows with
> > prefix 1212
> >
> > 2015-12-01 16:27 GMT+08:00 Rajeshkumar J :
> >
> > > so you want me to design row-key value
Yeah, if you want to get all records about 1212, just scan rows with
prefix 1212
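The prefix-scan suggestion can be made concrete by computing the exclusive stop row for the scan: the smallest key that no longer starts with the prefix. This is the usual increment-the-last-byte trick (the same idea behind HBase's prefix scans), sketched in Python with illustrative names:

```python
# Sketch: derive the exclusive stop row for a scan over all rows that
# start with `prefix`. Trailing 0xFF bytes cannot be incremented, so they
# are dropped first; an all-0xFF prefix means "scan to end of table".
def stop_row_for_prefix(prefix: bytes) -> bytes:
    p = bytearray(prefix)
    while p and p[-1] == 0xFF:
        p.pop()
    if not p:
        return b""          # open-ended: scan to the end of the table
    p[-1] += 1
    return bytes(p)
```

A scan from start row b"1212" to the returned stop row then covers exactly the rows with that prefix.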
2015-12-01 16:27 GMT+08:00 Rajeshkumar J :
> so you want me to design row-key value by appending name column value to
> the rowkey
>
> On Tue, Dec 1, 2015 at 1:19 PM, Heng Chen
> wrote:
12 | | 22
>
> On Tue, Dec 1, 2015 at 12:03 PM, Heng Chen
> wrote:
>
> > why not
> >
> > 1212 | 10, 11, 12, 13, 14, 15, 16, 27, 28 ?
> >
> > 2015-12-01 14:29 GMT+08:00 Rajeshkumar J :
> >
> > > Hi Ted,
> > >
why not
1212 | 10, 11, 12, 13, 14, 15, 16, 27, 28 ?
2015-12-01 14:29 GMT+08:00 Rajeshkumar J :
> Hi Ted,
>
> This is my use case. I have to store values like this is it possible?
>
> RowKey | Values
>
> 1212 | 10,11,12
>
> 1212 | 13, 14, 15
>
> 1212 | 16,27,28
>
> Thanks
>
>
> On Mon, Nov
It caused the regionserver to go down? Oh, could you post some regionserver logs?
2015-11-18 16:22 GMT+08:00 聪聪 <175998...@qq.com>:
> We recently found that a regionserver went down. Later, we found it was
> because the client and server versions are not compatible. The client
> version is 1.0, the server version is 0.98.6
org.apache.pig.backend.hadoop.hbase.HBaseStorage is in pig project.
*ERROR:pig script failed to validate: java.lang.RuntimeException: could not
instantiate 'org.apache.pig.backend.hadoop.hbase.HBaseStorage' with
arguments.*
This message means the arguments are not correct.
Please check your argum
Oh, that will never happen.
Each put acquires the row lock to guarantee consistency.
2015-11-17 16:20 GMT+08:00 hongbin ma :
> I found a good article
> https://blogs.apache.org/hbase/entry/apache_hbase_internals_locking_and
> which seems to have answered my question.
>
> so my described scenar
How about this way?
rowkey: Parent1
cf "children":
    col1-key: Child1Name, col1-value: Child1 information
    col2-key: Child2Name, col2-value: child2 information
    ..
You can get one child information e