regionserver.HRegionServer: STOPPED: Unhandled exception. Starting shutdown.
>
> Maybe anyone can give a hint on this? It seems there aren't a lot of people
> using ceph+hbase, but I don't lose anything by asking :)
>
> This is my current hbase-site.xml just in case
>
On Tue, Dec 29, 2015 at 10:42 PM, Jose M wrote:
> Hi,
>
>
> Sorry to ask again, but I'm not getting any further. I think I'm missing some
> really obvious permission configuration, as I was testing the hadoop command
> and it only works with one user (the one I used to install ceph).
>
>
> This works
Hi,
Sorry to ask again, but I'm not getting any further. I think I'm missing some
really obvious permission configuration, as I was testing the hadoop command
and it only works with one user (the one I used to install ceph).
This works
ubuntu@cephmaster:~/ceph-cluster$ hadoop fs -put ceph.conf /
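One thing worth ruling out when only the install user can run hadoop commands: the bindings may be authenticating with the admin keyring (ceph.client.admin.keyring), which is typically readable only by that user. A sketch of pointing the bindings at a dedicated, world-readable client key in core-site.xml (property names as I remember them from the cephfs-hadoop bindings, and all paths/values hypothetical; verify against your version's docs):

```xml
<!-- Hypothetical values; property names assumed from the cephfs-hadoop bindings -->
<property>
  <name>ceph.conf.file</name>
  <value>/etc/ceph/ceph.conf</value>
</property>
<property>
  <!-- authenticate as client.hadoop instead of client.admin -->
  <name>ceph.auth.id</name>
  <value>hadoop</value>
</property>
<property>
  <!-- key file must be readable by every user that runs hadoop/hbase -->
  <name>ceph.auth.keyfile</name>
  <value>/etc/hadoop/ceph.hadoop.key</value>
</property>
```

If the keyring path and permissions check out, the remaining suspect is plain POSIX ownership on the CephFS directories hadoop writes to.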
Hi!
Has anyone successfully configured hbase to use ceph? I'm having some problems
with it; maybe someone can help.
I have ceph already running and ceph-hadoop bindings installed ('hadoop fs -ls
/' working). I'm trying HBase in pseudo-distributed mode, but when starting
hbase-master I'm getting
On 04/14/2014 11:38 PM, Noah Watkins wrote:
> This strikes me as a difference in semantics between HDFS and CephFS,
> and like Greg said it's probably based on HBase assumptions. It'd be
> really helpful to find out what the exception is. If you are building
> the Hadoop bindings from scratch, you
This strikes me as a difference in semantics between HDFS and CephFS,
and like Greg said it's probably based on HBase assumptions. It'd be
really helpful to find out what the exception is. If you are building
the Hadoop bindings from scratch, you can instrument `listStatus` in
`CephFileSystem.java`
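A minimal sketch of the kind of instrumentation being suggested, using plain `java.io.File` as a stand-in since the real `CephFileSystem` only compiles inside a Hadoop build (class and method names here are illustrative, not the actual bindings):

```java
import java.io.File;
import java.util.Arrays;

// Illustrative stand-in: log what a listing returns before handing it back,
// the way one might instrument CephFileSystem.listStatus() to see which
// path and result trigger the HBase exception.
public class ListStatusProbe {
    static File[] listStatus(File dir) {
        // Plain java.io returns null for a missing or non-directory path,
        // while HDFS's listStatus throws FileNotFoundException -- exactly
        // the kind of semantic gap callers like HBase can trip over.
        File[] entries = dir.listFiles();
        System.err.println("listStatus(" + dir + ") -> "
                + (entries == null ? "null" : Arrays.toString(entries)));
        return entries;
    }

    public static void main(String[] args) {
        File[] result = listStatus(new File("/no/such/path"));
        System.out.println(result == null ? "null listing" : "got " + result.length + " entries");
    }
}
```

Logging the argument and the result (or the thrown exception) at each call site is usually enough to spot which directory HBase is choking on.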
This looks like some kind of HBase issue to me (which I can't help
with; I've never used it), but I guess if I were looking at Ceph I'd
check if it was somehow configured such that the needed files are
located in different pools (or other separate security domains) that
might be set up wrong.
-Greg
Hi,
I am trying to make HBase 0.96 work on top of Ceph 0.72.2. When I start
the HBase master I am getting this error.
2014-04-05 23:39:39,475 DEBUG [master:pltrd023:6] wal.FSHLog: Moved
1 WAL file(s) to /hbase/data/hbase/meta/1588230740/oldWALs
2014-04-05 23:39:39,538 FATAL [master:host:6
On Fri, Jul 19, 2013 at 11:04 PM, Noah Watkins wrote:
> On Fri, Jul 19, 2013 at 8:09 AM, ker can wrote:
>>
>> With ceph is there any way to influence the data block placement for a
>> single file ?
>
> AFAIK, no... But, this is an interesting twist. New files written out
> to HDFS, IIRC, will by
On Fri, Jul 19, 2013 at 8:09 AM, ker can wrote:
>
> With ceph is there any way to influence the data block placement for a
> single file ?
AFAIK, no... But, this is an interesting twist. New files written out
to HDFS, IIRC, will by default store 1 local and 2 remote copies. This
is great for MapR
On Thu, Jul 18, 2013 at 3:13 PM, ker can wrote:
>
> the hbase+hdfs throughput results were 38x better.
> Any thoughts on what might be going on ?
>
>
Looks like this might be a data locality issue. After loading the table,
when I look at the data block map of a region's store files, it's spread out
How is your hbase performance on ceph compared to hdfs - were there some
special knobs that you needed to turn?
I'm running a few hbase tests with the yahoo cloud serving benchmark -
ycsb with ceph & hdfs and the results were very surprising considering
that the hadoop + ceph results were not a
Yep, it's working now. I guess the deprecated annotation for
createNonRecursive threw me off. :o)
@Deprecated
public FSDataOutputStream createNonRecursive(Path path, FsPermission permission,
    boolean overwrite, int bufferSize, short replication, long blockSize,
    Progressable progress) throws IOException
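For anyone else thrown off the same way: `@Deprecated` is only a compile-time warning, so the method still runs and still dispatches to subclass overrides, which is why implementing it in the Ceph bindings works despite the annotation. A self-contained illustration (hypothetical minimal classes, not the real Hadoop API):

```java
// @Deprecated does not disable a method; it only warns at compile time.
// An override in a subclass is reached through normal dynamic dispatch.
public class DeprecatedDemo {
    static class BaseFs {
        @Deprecated
        String createNonRecursive(String path) {
            throw new UnsupportedOperationException("unsupported for this filesystem class");
        }
    }

    static class CephLikeFs extends BaseFs {
        @Override
        @Deprecated
        String createNonRecursive(String path) {
            return "created " + path;  // the override runs; no exception
        }
    }

    public static void main(String[] args) {
        BaseFs fs = new CephLikeFs();
        System.out.println(fs.createNonRecursive("/hbase/WALs/log.1"));
    }
}
```

Running this prints `created /hbase/WALs/log.1`; the base-class `UnsupportedOperationException` is never reached once the subclass provides an implementation.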
ceph-users mailing list
Yup, that was me.
We have hbase working here.
You'll want to disable localized reads, as per bug #5388. That bug
will cause your regionservers to crash fairly often when doing
compaction.
You'll also want to restart each of the regionservers and masters
often (We're doing it once a day) to mitigate
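If the bindings in use expose the localized-reads switch as a configuration property, the workaround for bug #5388 might look like this in core-site.xml (the property name is an assumption from memory of the cephfs-hadoop source; verify it against your version before relying on it):

```xml
<!-- Assumed property name; check your cephfs-hadoop version's source -->
<property>
  <name>ceph.localize.reads</name>
  <value>false</value>
</property>
```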
Yeh, check if the merge removed createNonRecursive. I specifically
remember adding that function for someone on the mailing list that was
trying to get HBase working.
http://tracker.ceph.com/issues/4555
On Wed, Jul 17, 2013 at 3:41 PM, ker can wrote:
> This is probably something I introduced in
This is probably something I introduced in my private version ... when I
merged the 1.0 branch with the hadoop-topo branch. Let me fix this and try
again.
On Wed, Jul 17, 2013 at 5:35 PM, ker can wrote:
> Some more from lastIOE.printStackTrace():
>
> Caused by: java.io.IOException: java.io.IOE
Some more from lastIOE.printStackTrace():
Caused by: java.io.IOException: java.io.IOException: createNonRecursive
unsupported for this filesystem class
org.apache.hadoop.fs.ceph.CephFileSystem
at
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.init(SequenceFileLogWriter.java
On Wed, Jul 17, 2013 at 11:07 AM, ker can wrote:
> Hi,
>
> Has anyone got hbase working on ceph? I've got ceph (cuttlefish) and
> hbase-0.94.9.
> My setup is erroring out looking for getDefaultReplication &
> getDefaultBlockSize ... but I can see those defined in
> core/org/apache/hadoop/fs/ceph/
Hi,
Has anyone got hbase working on ceph? I've got ceph (cuttlefish) and
hbase-0.94.9.
My setup is erroring out looking for getDefaultReplication &
getDefaultBlockSize ... but I can see those defined in
core/org/apache/hadoop/fs/ceph/CephFileSystem.java
hbase seems to be talking to ceph okay ...