s("family"), Bytes.toBytes("qualifier"),
> Bytes.toBytes("value")));
>
> ...
>
> FakeHBase is essentially a factory for fake HBaseAdmin and HTableInterface
> instances.
> Let me know if that works or not for you.
> Thanks,
> C.
>
>
> On
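The thread doesn't show FakeHBase's API beyond the snippet above, but the general idea of handing your code a fake HTableInterface can be sketched with Mockito instead; the table, family, and value names below are placeholders, not FakeHBase's actual interface.

// Sketch only: a fake HTableInterface via Mockito (0.9x-era client API).
// "t1", "family", "qualifier", and "value" are placeholders.
import static org.mockito.Mockito.*;

import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class FakeTableSketch {
  public static HTableInterface fakeTable() throws Exception {
    Result result = mock(Result.class);
    when(result.getValue(Bytes.toBytes("family"), Bytes.toBytes("qualifier")))
        .thenReturn(Bytes.toBytes("value"));

    HTableInterface table = mock(HTableInterface.class);
    when(table.getTableName()).thenReturn(Bytes.toBytes("t1"));
    when(table.get(any(Get.class))).thenReturn(result);   // every Get returns the stubbed Result
    return table;
  }
}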
instead of mocking
> responses, you will have to populate data in HBase tables, but I still feel
> this is more intuitive and reliable.
>
> Regards,
> Dhaval
>
>
>
> From: Adam Phelps
> To: user@hbase.apache.org
> Sent: Monday, 24 June 201
On 6/18/13 4:22 PM, Stack wrote:
> On Tue, Jun 18, 2013 at 4:17 PM, Varun Sharma wrote:
>
>> Hi,
>>
>> If I wanted to write a unit test against HTable/HBase, is there an
>> already available utility to do that for unit testing my application logic?
>>
>> I don't want to write code that eith
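One common way to do this (a sketch, not necessarily the reply that follows) is HBaseTestingUtility, which starts an in-process mini cluster so the test talks to a real HTable; the table, family, and row names here are placeholders.

// Sketch: test against a real HTable using an in-process mini cluster.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster();   // in-process HDFS, ZooKeeper, and region server
    try {
      HTable table = util.createTable(Bytes.toBytes("test"), Bytes.toBytes("f"));
      Put put = new Put(Bytes.toBytes("row1"));
      put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("value"));
      table.put(put);
      byte[] v = table.get(new Get(Bytes.toBytes("row1")))
          .getValue(Bytes.toBytes("f"), Bytes.toBytes("q"));
      System.out.println(Bytes.toString(v));   // prints "value"
    } finally {
      util.shutdownMiniCluster();
    }
  }
}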
tml.
> (Default: console)
>
> -d | --debug Set DEBUG log levels.
> -h | --help    This help.
>
> My guess is this facility has rotted from lack of use but if you wanted to
> hook it all up again, the ghosts of an
Right up there with my ruby skills (i.e., I've done very little editing
of ruby or jruby scripts). I'm fine with the J part of that, but that's
about it.
- Adam
On 12/19/12 11:17 AM, Michael Segel wrote:
> How are your JRuby skills?
>
>
> On Dec 19, 2012, at 1:05
Is there a reasonably straightforward way to customize the shell's
output for a given table? While many of our tables use string-based
data, we have some that use serialized data which is useless to view
directly in the shell on the occasions where it'd be useful for
debugging the output of an M/R job.
Please file a JIRA.
Thanks Adam.
On Thu, Jun 23, 2011 at 4:40 PM, Adam Phelps <a...@opendns.com> wrote:
(As a note, this is with CDH3u0 which is based on HBase 0.90.1)
We've been seeing intermittent failures of calls to
LoadIncrementalHFiles. When this happens th
(As a note, this is with CDH3u0 which is based on HBase 0.90.1)
We've been seeing intermittent failures of calls to
LoadIncrementalHFiles. When this happens the node that made the call
will see a FileNotFoundException such as this:
2011-06-23 15:47:34.379566500 java.net.SocketTimeoutException
changed slightly during the upgrade?
-Todd
On Fri, Apr 29, 2011 at 1:11 PM, Adam Phelps <a...@opendns.com> wrote:
I could believe that, although I was under the impression that these
files are actually incorporated into the existing region files.
Still, it's defin
are probably not deleted, but moved to the appropriate region
subdirectory under /hbase.
On Fri, Apr 29, 2011 at 1:15 PM, Adam Phelps wrote:
I just verified this, and the hfiles seem to be deleted one at a time as
the bulk load runs.
- Adam
On 4/28/11 4:28 PM, Stack wrote:
I took a look
ance.
Can you figure what is doing the delete? At what stage? Is it as
completebulkload runs?
St.Ack
On Thu, Apr 28, 2011 at 10:59 AM, Adam Phelps wrote:
We were using a backup scheme for our system where we have map-reduce jobs
generating HFiles, which we then loaded using LoadIncrementalHFi
We were using a backup scheme for our system where we have map-reduce
jobs generating HFiles, which we then loaded using LoadIncrementalHFiles
before making a remote copy of them using distcp.
However, we just upgraded HBase (we're using Cloudera's package, so we
went from CDH3B4 to CDH3U0, bot
On 3/31/11 12:41 PM, Ted Yu wrote:
Adam:
I logged https://issues.apache.org/jira/browse/HBASE-3721
Thanks for opening that. I haven't delved much into the HBase code
previously, but I may take a look into this since it is causing us some
trouble currently.
- Adam
On 3/30/11 8:39 PM, Stack wrote:
What is slow? The running of the LoadIncrementalHFiles or the copy?
It's the LoadIncrementalHFiles portion.
If
the former, is it because the table it's loading into has different
boundaries than those of the HFiles, so the HFiles have to be split?
I'm sure that co
Does anyone have any suggestions for speeding up LoadIncrementalHFiles?
We have M/R jobs that directly generate HFiles, which are then loaded into
HBase via LoadIncrementalHFiles. We're attempting to maintain a backup
of our production HBase on a backup Hadoop cluster by copying the HFiles
there
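For context, the bulk load step under discussion is roughly the following (a sketch; the HFile directory and table name are placeholders, not values from this thread).

// Sketch: the completebulkload step done programmatically (0.90-era API).
// "/tmp/hfiles" and "mytable" are placeholders.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");
    // Moves each HFile under the job output dir into its matching region,
    // splitting any HFile that straddles a region boundary.
    new LoadIncrementalHFiles(conf).doBulkLoad(new Path("/tmp/hfiles"), table);
  }
}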
On 3/21/11 10:13 PM, Stack wrote:
On Mon, Mar 21, 2011 at 7:19 PM, Adam Phelps wrote:
It looks like we've come up against a problem identical to the
one you described. How did you go about manually inserting the two child
regions?
You know the daughter regions because
On 3/6/11 10:08 AM, Marc Limotte wrote:
1. split started on region A
2. region A was offlined
3. The daughter regions were created in HDFS with the reference files
4. .META. was updated for region A
5. server crashed
So, the new daughter entries were never added to .META.
On 12/14/10 12:57 AM, Jonathan Gray wrote:
Hey Adam,
Do you need to scan all of the entries in order to know which ones you need to
change the expiration of? Or do you have that information as an input?
I don't have to scan everything, but I also can't pinpoint all the
entries in advance.
On 12/13/10 11:11 AM, Adam Phelps wrote:
Does anyone have suggestions regarding the best way to modify existing
entries in a table?
We have our tables set up such that when we create an entry we set its
timestamp such that the entry has a rough expiration time, i.e., we have a
TTL on the table as
Does anyone have suggestions regarding the best way to modify existing
entries in a table?
We have our tables set up such that when we create an entry we set its
timestamp such that the entry has a rough expiration time, i.e., we have a
TTL on the table as a whole and then adjust the timestamp s
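A sketch of that scheme (row, family, and qualifier names are placeholders): since the table-level TTL is enforced against each cell's timestamp, pushing back an entry's effective expiration means re-writing the cell with a newer explicit timestamp.

// Sketch: re-write a cell with an explicit timestamp so the table TTL
// expires it later. Family/qualifier names are placeholders.
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class AdjustExpirationSketch {
  static void extendExpiration(HTable table, byte[] row, byte[] value,
                               long newTimestamp) throws Exception {
    Put put = new Put(row);
    // With TTL t on the table, this cell now lives until roughly newTimestamp + t.
    put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), newTimestamp, value);
    table.put(put);
  }
}

The older version of the cell is masked by the newer timestamp and only goes away at compaction (or when it falls past the max versions setting).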
On 11/12/10 10:34 AM, Shuja Rehman wrote:
@Adam
Why do you not use configureIncrementalLoad(), which automatically sets up a
TotalOrderPartitioner?
I just wasn't aware of this method until today. If that produces a more
efficient result then I'll be switching to it.
- Adam
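For reference, a sketch of the job setup around configureIncrementalLoad(), which installs the reducer, the HFile output format, and a TotalOrderPartitioner derived from the table's current region boundaries; the mapper and output path below are placeholders.

// Sketch: HFile-generating job setup. configureIncrementalLoad() installs the
// reducer, HFileOutputFormat, and a TotalOrderPartitioner matching the table's
// current region boundaries. The output path is a placeholder.
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class HFileJobSketch {
  static Job setup(HTable table) throws Exception {
    Job job = new Job(HBaseConfiguration.create(), "hfile-gen");
    // job.setMapperClass(YourHFileMapper.class);  // your mapper, emitting (ImmutableBytesWritable, Put)
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);
    FileOutputFormat.setOutputPath(job, new Path("/tmp/hfiles"));
    HFileOutputFormat.configureIncrementalLoad(job, table);   // no manual setPartitionerClass needed
    return job;
  }
}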
On 11/10/10 11:57 AM, Stack wrote:
On Wed, Nov 10, 2010 at 11:53 AM, Shuja Rehman wrote:
oh! I think you have not read the full post. The essay has 3 paragraphs :)
Should I also add the following line:
job.setPartitionerClass(TotalOrderPartitioner.class);
You need to specify other
e.
- Adam
On 11/8/10 5:19 PM, Buttler, David wrote:
Could it be speculative execution? You might want to ensure that is turned off.
Dave
-----Original Message-----
From: Adam Phelps [mailto:a...@opendns.com]
Sent: Monday, November 08, 2010 4:30 PM
To: mapreduce-u...@hadoop.apache.org; user@h
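For reference, a sketch of turning speculative execution off for a CDH3-era MapReduce job, using the pre-YARN property names.

// Sketch: disable speculative execution so duplicate task attempts can't
// double-write (property names are the pre-YARN, CDH3-era ones).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SpeculationOffSketch {
  static void disableSpeculation(Job job) {
    Configuration conf = job.getConfiguration();
    conf.setBoolean("mapred.map.tasks.speculative.execution", false);
    conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
  }
}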
this out.
- Adam
On 11/5/10 4:01 PM, Adam Phelps wrote:
Yeah, it wasn't the combiner. The repeated entries are actually seen by
the mapper, so they appear before the combiner comes into play. Is there some other
info that would be useful in getting clues as to what is causing this?
- Adam
On 11/5/1
I've noticed an odd behavior with a map-reduce job I've written that is
reading data out of an HBase table. After a couple of days of poking at
this I haven't been able to figure out the cause of the problem, so I
figured I'd ask here.
(For reference I'm running with the cdh3b2 release)
The
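For context, a sketch of how such a table-scanning job is typically wired up via TableMapReduceUtil; the mapper, output types, and scan caching value below are illustrative, not the job from this thread.

// Sketch: wiring a mapper to scan an HBase table via TableMapReduceUtil.
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class TableScanSketch {
  static class RowKeyMapper extends TableMapper<Text, NullWritable> {
    @Override
    protected void map(ImmutableBytesWritable row, Result value, Context context)
        throws IOException, InterruptedException {
      context.write(new Text(row.get()), NullWritable.get());   // one record per row scanned
    }
  }

  static Job setup(String tableName) throws IOException {
    Job job = new Job(HBaseConfiguration.create(), "table-scan");
    Scan scan = new Scan();
    scan.setCaching(500);         // illustrative value
    scan.setCacheBlocks(false);   // don't churn the block cache from an MR scan
    TableMapReduceUtil.initTableMapperJob(tableName, scan, RowKeyMapper.class,
        Text.class, NullWritable.class, job);
    return job;
  }
}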
re on the list Adam so we make sure we've
the issues covered when we cut the 0.90.0RC.
Will do.
Thanks
- Adam Phelps
node decommissioning process running when this occurred, could
that have contributed to this? I'm trying to understand the causes
behind problems such as this to better avoid them once we move into
production.
Thanks
- Adam Phelps
On 10/18/10 9:05 PM, Stack wrote:
What version of HBase
processes, so I'm not sure what to look for to correct
these errors. Any suggestions would be appreciated.
- Adam Phelps