On Thu, Apr 5, 2012 at 2:45 PM, shashwat shriparv
wrote:
> I am able to create external tables in Hive over HBase; now I have a
> requirement to create an external table that has variable columns,
> which means the columns in HBase are not fixed for the particular table;
> the number of columns a
The log says that the region server tried to talk to the region server
"dp7.abcd.com" and it timed out after 60 seconds, and that happened
during a split, which is pretty bad. As the log says:
org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Abort; we got
an error after point-of-no-return
Hi Placido,
Sounds like it might be related to HDFS-2379. Try updating to Hadoop
1.0.1 or CDH3u3 and you'll get a fix for that.
You can verify by grepping for "BlockReport" in your DN logs - if the
pauses on the hbase side correlate with long block reports on the DNs,
the upgrade should fix it.
I just did that.
Thanks so much for your help!
Best,
Bing
Methods Missing in HTableInterface
--
Key: HBASE-5728
URL: https://issues.apache.org/jira/browse/HBASE-5728
Project: HBase
Issue Type: Improvement
+1, there are quite a few missing that should be in there. Please create a JIRA
issue so that we can discuss and agree on which to add.
Lars
On Apr 5, 2012, at 6:23 PM, Stack wrote:
> On Thu, Apr 5, 2012 at 4:20 AM, Bing Li wrote:
>> Dear all,
>>
>> I found that some methods in HTable are not in HTableInterface.
On Thu, Apr 5, 2012 at 4:20 AM, Bing Li wrote:
> Dear all,
>
> I found that some methods in HTable are not in HTableInterface.
>
> setAutoFlush
> setWriteBufferSize
> ...
>
Make a patch to add them?
Thanks,
St.Ack
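For concreteness, a sketch of what such a patch might add to HTableInterface
(the signatures mirror HTable's existing methods; which of them actually
belong on the interface is exactly what the JIRA discussion would settle):

public interface HTableInterface {
  // ... existing methods (get, put, delete, etc.) ...

  // Candidates that today exist only on HTable:
  void setAutoFlush(boolean autoFlush);
  void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail);
  long getWriteBufferSize();
  void setWriteBufferSize(long writeBufferSize) throws IOException;
}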
I think you can cast the interface to HTable, like this:
// Note: this assumes the pool hands back a plain HTable underneath; if
// the pool wraps the table, the cast will throw a ClassCastException.
HTablePool pool = new HTablePool();
HTable table = (HTable) pool.getTable("test");
table.setAutoFlush(true, true); // autoflush on, clear write buffer on fail
System.out.println(table.isAutoFlush());
On Thu, Apr 5, 2012 at 7:20 PM, Bing Li wrote:
> Dear all,
>
>
Freudian slip :)
-eran
On Thu, Apr 5, 2012 at 16:52, Ted Yu wrote:
> Thanks for writing back.
>
> I guess you meant 'things are now operating well', below :-)
>
> On Thu, Apr 5, 2012 at 6:25 AM, Eran Kutner wrote:
>
> > As promised I'm writing back to update the list.
> > Seems that after up
Thanks for writing back.
I guess you meant 'things are now operating well', below :-)
On Thu, Apr 5, 2012 at 6:25 AM, Eran Kutner wrote:
> As promised I'm writing back to update the list.
> Seems that after upgrading to cdh3u3 of the hadoop cluster and zookeeper
> ensemble (hadoop alone wasn't
Yes, I know, but it's just an example; we could do the same example with
one billion rows, but then, effectively, you could tell me that in that
case the rows would be stored on all the nodes.
Maybe it's not possible to distribute the task manually across the cluster?
And maybe it's not a good idea, but I would like to kn
As promised I'm writing back to update the list.
Seems that after upgrading to cdh3u3 of the hadoop cluster and zookeeper
ensemble (hadoop alone wasn't enough) things are no operating well with no
HDFS errors in the logs. I've also set
hbase.regionserver.logroll.errors.tolerated to 3 just in case.
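For anyone searching the archives later: that setting goes in hbase-site.xml
on the region servers. A minimal snippet (the value 3 is just what was used
here, not a general recommendation):

<property>
  <name>hbase.regionserver.logroll.errors.tolerated</name>
  <value>3</value>
</property>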
If you only have 1000 rows, why use MapReduce?
On 4/5/12 6:37 AM, "Arnaud Le-roy" wrote:
>but do you think that I can change the default behavior?
>
>for example, I have ten nodes in my cluster and my table is stored on only
>two nodes; this table has 1000 rows.
>with the default behavior o
Dear all,
I found that some methods in HTable are not in HTableInterface.
setAutoFlush
setWriteBufferSize
...
In most cases, I manipulate HBase through HTableInterface from HTablePool.
If I need to use the above methods, how can I do that?
I am considering writing my own table pool if
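If it comes to that, a minimal sketch of what such a pool might look like
(the class name and shape are invented for illustration; the point is just
that it hands out concrete HTable instances, so the HTable-only methods
stay reachable):

import java.io.IOException;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class SimpleHTablePool {
  private final Configuration conf;
  private final Map<String, Queue<HTable>> tables =
      new HashMap<String, Queue<HTable>>();

  public SimpleHTablePool(Configuration conf) {
    this.conf = conf;
  }

  // Hand out a concrete HTable, creating a new one if the pool is empty.
  public synchronized HTable getTable(String name) throws IOException {
    Queue<HTable> q = tables.get(name);
    HTable t = (q == null) ? null : q.poll();
    return (t != null) ? t : new HTable(conf, name);
  }

  // Return a table to the pool, flushing buffered writes first so the
  // next borrower doesn't inherit them.
  public synchronized void putTable(HTable t) throws IOException {
    t.flushCommits();
    String name = Bytes.toString(t.getTableName());
    Queue<HTable> q = tables.get(name);
    if (q == null) {
      q = new LinkedList<HTable>();
      tables.put(name, q);
    }
    q.offer(t);
  }
}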
but do you think that I can change the default behavior?
for example, I have ten nodes in my cluster and my table is stored on only
two nodes; this table has 1000 rows.
with the default behavior, only two nodes will work on a map/reduce
task, isn't it?
if I do a custom input that splits the table
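A rough sketch of that idea, assuming it means subclassing TableInputFormat
so each region contributes more than one split (the class name and the
simple halving are illustrative, untested code):

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.mapreduce.TableSplit;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;

public class SubdividedTableInputFormat extends TableInputFormat {
  @Override
  public List<InputSplit> getSplits(JobContext context) throws IOException {
    List<InputSplit> perRegion = super.getSplits(context);
    List<InputSplit> finer = new ArrayList<InputSplit>();
    for (InputSplit s : perRegion) {
      TableSplit ts = (TableSplit) s;
      byte[] start = ts.getStartRow();
      byte[] end = ts.getEndRow();
      // The first and last regions have empty boundary keys; splitting
      // those ranges needs knowledge of real keys, so keep them whole.
      if (start.length == 0 || end.length == 0) {
        finer.add(ts);
        continue;
      }
      // Bytes.split returns { start, midpoint, end } for one split point.
      byte[][] keys = Bytes.split(start, end, 1);
      finer.add(new TableSplit(ts.getTableName(), keys[0], keys[1],
          ts.getRegionLocation()));
      finer.add(new TableSplit(ts.getTableName(), keys[1], keys[2],
          ts.getRegionLocation()));
    }
    return finer;
  }
}

Note that extra splits only buy anything if the extra map tasks can actually
run somewhere useful; the data itself still lives on whichever nodes HDFS
placed it.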
I found this on the mailing list; maybe updating to cdh3u3 or 0.90.5 will
solve this problem. Thanks all.
It's supposed to but there are a few leaks such as:
https://issues.apache.org/jira/browse/HBASE-4799
https://issues.apache.org/jira/browse/HBASE-4238
J-D
On Mon, Nov 21, 2011 at 7:32 PM, 吕鹏 wrote:
Hi,
It should. I haven't tested 0.90, but I tested the HBase trunk a few
months ago against ZK 3.4.x and ZK 3.3.x and it was working.
N.
2012/4/5 lulynn_2008
> Hi,
> I found hbase-0.90.2 uses zookeeper-3.4.2. Can this version of hbase work
> with zookeeper-3.3.4?
>
> Thank you.
>
I have loaded my data into HBase.
After HBase calmed down, I found that the data size is about 10 times the
original data size.
Finally I found that the number of directories in /hbase/tableName is much
larger than the number of regions.
When I use hadoop fs -dus /hbase/tableName to see the size of the
directories, I f