to retrieve this sort of information by dirty hacks :)
Regards,
Christopher
On Mon, Jan 9, 2012 at 11:36 PM, Jean-Daniel Cryans wrote:
> It would definitely be interesting, please do report back.
>
> Thx,
>
> J-D
>
> On Mon, Jan 9, 2012 at 2:33 PM, Christopher Dorner
>
getServerAddress().getHostname()
J-D
On Mon, Jan 9, 2012 at 1:19 PM, Christopher Dorner
wrote:
Hi,
I am using the input of a mapper as a row key to make a GET request to a
table.
Is it somehow possible to retrieve information about how much data had to be
transferred over the network, or how many of the requests were served by a
region not on the same node?
That would be some really cool and useful statistics for us :)
Thank you,
Christopher Dorner
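A rough sketch of what J-D's hint amounts to in code, assuming the 0.90-era client
API and an example table name; in a real mapper you would open the HTable once in
setup() and bump a Counter instead of printing:

import java.net.InetAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class LocalityCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");          // example table name
    String localHost = InetAddress.getLocalHost().getHostName();

    byte[] rowKey = Bytes.toBytes("some-row");           // would be the mapper's input key
    // Which region server hosts the region this row key belongs to?
    String regionHost = table.getRegionLocation(rowKey).getServerAddress().getHostname();

    boolean local = regionHost.equalsIgnoreCase(localHost);
    System.out.println(Bytes.toString(rowKey) + " is served "
        + (local ? "locally" : "remotely") + " by " + regionHost);
    table.close();
  }
}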
Hi,
I am trying to do a bulkload into an HBase table with one column family,
using a custom mapper to create the Puts according to my needs. (Machine
setup at the end of the mail.)
Unfortunately, with our data it is a bit hard to presplit the tables,
since the keys are not that predictable
is coming
from completebulkload). It is trying to rename the files.
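For reference, a rough sketch of such a bulkload job setup, assuming the 0.90-era
HFileOutputFormat API; the table name, column family and the toy mapper are just
examples:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadPrepare {

  // Example mapper: first tab-separated field is the row key, second the value.
  public static class PutMapper
      extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    @Override
    protected void map(LongWritable key, Text line, Context ctx)
        throws IOException, InterruptedException {
      String[] f = line.toString().split("\t", 2);
      Put put = new Put(Bytes.toBytes(f[0]));
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(f[1]));
      ctx.write(new ImmutableBytesWritable(Bytes.toBytes(f[0])), put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "prepare-hfiles");
    job.setJarByClass(BulkLoadPrepare.class);
    job.setMapperClass(PutMapper.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // Sets up HFileOutputFormat, the reducer and a TotalOrderPartitioner based on
    // the table's *current* region boundaries -- which is why presplits matter.
    HTable table = new HTable(conf, "mytable");          // example table name
    HFileOutputFormat.configureIncrementalLoad(job, table);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}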
-Original Message-
From: Christopher Dorner [mailto:christopher.dor...@gmail.com]
Sent: Tuesday, December 13, 2011 3:29 AM
To: user@hbase.apache.org
Subject: bulkload on fully distributed mode - permissions
Hi,
I stumbled upon an error which was not present in pseudo-distributed mode.
When I try to run a bulkload, it fails after creating the HFiles with the
following error:
org.apache.hadoop.security.AccessControlException:
org.apache.hadoop.security.AccessControlException:
org.apache.hadoop.securi
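One workaround that is often suggested for this kind of failure (a sketch, under the
assumption that the rename fails because the process doing the move in
completebulkload runs as the hbase user, which does not own the HFiles produced by
the MR job): open up the permissions on the job output directory before running
completebulkload. The same can be done from the shell with
hadoop fs -chmod -R 777 <output dir>.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class OpenUpHFileDir {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    FileSystem fs = FileSystem.get(conf);

    // Output directory of the HFile-creating job -- pass your own path.
    Path out = new Path(args[0]);
    FsPermission open = new FsPermission((short) 0777);

    // setPermission() is not recursive, so walk the column-family subdirectories too.
    fs.setPermission(out, open);
    for (FileStatus family : fs.listStatus(out)) {
      fs.setPermission(family.getPath(), open);
      if (family.isDir()) {
        for (FileStatus hfile : fs.listStatus(family.getPath())) {
          fs.setPermission(hfile.getPath(), open);
        }
      }
    }
  }
}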
Mohammad Tariq:
Hello Christopher,
I don't have 127.0.1.1 in my hosts file.
Regards,
Mohammad Tariq
On Thu, Dec 1, 2011 at 6:42 PM, Christopher Dorner
wrote:
Your hosts file (/etc/hosts) should only contain something like
127.0.0.1 localhost
Or
127.0.0.1
It should not contain something like
127.0.1.1 localhost.
And I think you need to reboot after changing it. Hope that helps.
Regards,
Christopher
On 01.12.2011 13:24, "Mohammad Tariq" wrote:
> Hello list,
>
>
nt, because it is already running. Have you tried
restarting them?
Lars
On Nov 24, 2011, at 16:31, Christopher Dorner
wrote:
Yes, snappy is installed.
Lars, do you mean by "the rest" the Hadoop datanode, namenode,
jobtracker, HMaster, etc.?
I am not really sure, but I think I stopped t
> On Nov 24, 2011, at 12:39 PM, Gaojinchao wrote:
>
>> It seems you need to install snappy.
>>
>>
>> -Original Message-
>> From: Christopher Dorner [mailto:christopher.dor...@gmail.com]
>> Sent: November 24, 2011 18:10
>> To: user@hbase.apache.org
>> Subject: Erro
Hi,
I already posted this question on the Cloudera list, but I didn't get a
solution yet, so I want to ask here again.
I am currently running Hadoop and HBase in pseudo-distributed mode
using CDH3-u2. In this update, snappy was included for HBase 0.90.4-
cdh3u2.
I wanted to try it out and co
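For what it's worth, a minimal sketch of what trying Snappy out could look like from
the client API (table and family names are just examples); the create should fail at
region-open time if the Snappy native libraries cannot be loaded. There is also
org.apache.hadoop.hbase.util.CompressionTest, which can be run from the hbase
command to check whether a node can actually write and read a snappy-compressed file.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.io.hfile.Compression;

public class CreateSnappyTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    HTableDescriptor desc = new HTableDescriptor("snappytest");  // example table name
    HColumnDescriptor family = new HColumnDescriptor("cf");      // example family
    family.setCompressionType(Compression.Algorithm.SNAPPY);
    desc.addFamily(family);

    admin.createTable(desc);
  }
}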
, Nov 2, 2011 at 10:14 AM, Christopher Dorner<
christopher.dor...@gmail.com> wrote:
Will HBase 0.92 support MultiHFileOutputFormat and IncrementalLoad for
different tables?
Is there a comfortable way to make it work for HBase 0.90.4 as well? I
am using Cloudera's CDH3u2.
On 30.10.2011 12:57, Christopher Dorner wrote:
Hi,
I am facing a similar problem. I need to read a large file and put the data
into different HBase tables. Until now I have done it with
MultiTableOutputFormat directly from the mapper. That works OK, but I
believe it will become quite slow when I try larger files. But I thought
it is a good chance
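For reference, a sketch of what writing to several tables straight from the mapper
looks like with MultiTableOutputFormat: the map output key names the target table
and the value is the Put. Table names, the column family and the naive N3 split are
just examples. In the driver, job.setOutputFormatClass(MultiTableOutputFormat.class)
is the main thing needed; no reducer is required.

import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// With MultiTableOutputFormat the map output key names the target table,
// and the value is the Put (or Delete) to apply there.
public class TripleIndexMapper
    extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {

  private static final ImmutableBytesWritable SPO =
      new ImmutableBytesWritable(Bytes.toBytes("index_spo"));   // example table
  private static final ImmutableBytesWritable OSP =
      new ImmutableBytesWritable(Bytes.toBytes("index_osp"));   // example table

  @Override
  protected void map(LongWritable key, Text line, Context ctx)
      throws IOException, InterruptedException {
    String[] t = line.toString().split(" ", 3);   // naive N3 split, example only
    if (t.length < 3) return;

    Put spo = new Put(Bytes.toBytes(t[0]));
    spo.add(Bytes.toBytes("cf"), Bytes.toBytes(t[1]), Bytes.toBytes(t[2]));
    ctx.write(SPO, spo);

    Put osp = new Put(Bytes.toBytes(t[2]));
    osp.add(Bytes.toBytes("cf"), Bytes.toBytes(t[0]), Bytes.toBytes(t[1]));
    ctx.write(OSP, osp);
  }
}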
Hi,
I am considering doing reduce-side joins, where one input would be read
from HDFS and the other one from an HBase table.
Is it somehow possible to use
TableMapReduceUtil.initTableMapperJob(table, scan, Mapper_HBase.class,
..., job);
and
MultipleInputs.addInputPath(job, path, ..., Mapper_HDFS.class)
in the same job?
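One combination that is sometimes suggested (a sketch, not verified on 0.90/CDH3:
it assumes the new-API MultipleInputs is available, and it lets TableInputFormat
pick up its table name from the configuration instead of going through
initTableMapperJob; the mapper/reducer bodies are stand-ins for your own classes):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ReduceSideJoin {

  // HBase side: TableInputFormat hands the mapper (ImmutableBytesWritable, Result) pairs.
  public static class Mapper_HBase
      extends Mapper<ImmutableBytesWritable, Result, Text, Text> {
    @Override
    protected void map(ImmutableBytesWritable row, Result result, Context ctx)
        throws IOException, InterruptedException {
      ctx.write(new Text(Bytes.toString(row.get())), new Text("hbase"));
    }
  }

  // HDFS side: plain text lines, join key assumed to be the first tab-separated field.
  public static class Mapper_HDFS extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable off, Text line, Context ctx)
        throws IOException, InterruptedException {
      ctx.write(new Text(line.toString().split("\t", 2)[0]), new Text("hdfs"));
    }
  }

  public static class JoinReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context ctx)
        throws IOException, InterruptedException {
      for (Text v : values) ctx.write(key, v);   // real join logic would go here
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // TableInputFormat reads the table name (and optional scan settings) from the
    // configuration, so initTableMapperJob is not used at all.
    conf.set(TableInputFormat.INPUT_TABLE, "mytable");   // example table name

    Job job = new Job(conf, "reduce-side-join");
    job.setJarByClass(ReduceSideJoin.class);

    // The path given for the HBase input is ignored by TableInputFormat,
    // but MultipleInputs insists on one.
    MultipleInputs.addInputPath(job, new Path("ignored"), TableInputFormat.class, Mapper_HBase.class);
    MultipleInputs.addInputPath(job, new Path(args[0]), TextInputFormat.class, Mapper_HDFS.class);

    job.setReducerClass(JoinReducer.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}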
Christopher
On 04.10.2011 23:57, Jean-Daniel Cryans wrote:
Maybe try a different schema, yeah (hard to help without knowing
exactly how you end up overwriting the same triples all the time, though).
Setting timestamps yourself is usually bad, yes.
J-D
On Tue, Oct 4, 2011 at 7:14 AM, Christopher Dorner
er on the configuration that you pass in the
job setup.
Using HTablePool in a single threaded application doesn't offer more
than just storage for your HTables.
Hope that helps,
J-D
On Sat, Oct 1, 2011 at 4:05 AM, Christopher Dorner
wrote:
Hallo,
I am building an RDF store using
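To illustrate J-D's point, a small sketch of typical HTablePool usage with the
0.90-era API (example table, family and row names); the pool only pays off when
several threads borrow tables from it concurrently:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PoolExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // In a single-threaded client the pool is just a holder for one HTable.
    HTablePool pool = new HTablePool(conf, 10);

    HTableInterface table = pool.getTable("mytable");   // example table name
    try {
      Put put = new Put(Bytes.toBytes("row1"));
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
      table.put(put);
    } finally {
      pool.putTable(table);   // return the table to the pool (0.90-era API)
    }
  }
}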
On 01.10.2011 13:19, Christopher Dorner wrote:
Hallo,
I am reading a file containing RDF triples in a map job. The RDF triples
are then stored in a table where columns can have lots of versions.
So I need to store many values for one row key in the same column.
I made the observation that reading the file is very fast and thus some
value
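One "different schema" along the lines J-D suggests (a sketch, under the assumption
that the overwrites come from several puts to the same row/column landing in the
same millisecond): move the value into the column qualifier, so every value gets its
own cell and versions/timestamps stop mattering. Table, family and the triple below
are examples only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class QualifierPerValue {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "spo_index");        // example table name

    String subject = "<http://example.org/s>";           // example triple
    String predicate = "<http://example.org/p>";
    String object = "<http://example.org/o>";

    // Row = subject, qualifier = predicate + object, value left empty.
    // Each distinct value becomes its own cell, so same-millisecond puts
    // no longer overwrite each other.
    Put put = new Put(Bytes.toBytes(subject));
    put.add(Bytes.toBytes("cf"), Bytes.toBytes(predicate + "|" + object), Bytes.toBytes(""));
    table.put(put);
    table.close();
  }
}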
Hallo,
I am building an RDF store using HBase and experimenting with different
index tables and schema designs.
For the input, I have a file where each line is an RDF triple in N3 format.
I need to write to multiple tables since I need to build several index
tables. For the sake of reducing IO