On behalf of the Apache HBase PMC, I am pleased to announce that Lars
Francke has accepted the PMC's invitation to become a committer on the
project.
We appreciate all of Lars' great work thus far and look forward to
continued involvement.
Please join me in congratulating LarsF! (Opting to use la
Thanks Misty for working on this. I only had a chance to try on my iPhone, but
there it looks a bit cramped :(
Lars
Sent from my iPhone
> On 23 Oct 2015, at 07:17, Misty Stanley-Jones
> wrote:
>
> Hi all,
>
> We are currently using the reFlow Maven site skin. I went looking around
> and f
I noticed similar ZK-related issues, but those went away after changing the ZK
directory to a permanent directory, along with the HBase root directory. Both
now point to a location in my home folder and restarts work fine. Not much
help, but I wanted to at least state that.
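For anyone hitting the same thing, a minimal sketch of the two settings
involved (paths are placeholders, adjust to your setup); the same two
properties can go into hbase-site.xml instead:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  Configuration conf = HBaseConfiguration.create();
  // point both at permanent locations instead of the default /tmp
  // directories, which are wiped on reboot
  conf.set("hbase.rootdir", "file:///home/lars/hbase");
  conf.set("hbase.zookeeper.property.dataDir", "/home/lars/zookeeper");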
Lars
Sent from my
qq.com> wrote:
>
> Hi, Lars
>
>
> Thanks very much for sharing this example.
>
>
> By the way, does that mean the Puts and Deletes I submitted through table.put
> and table.delete won't be buffered and will be flushed immediately?
>
>
>
>
>
Hi,
This has been moved to BufferedMutator. I have an example here:
https://github.com/larsgeorge/hbase-book/blob/master/ch03/src/main/java/client/BufferedMutatorExample.java
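Roughly like this, as a sketch against the 1.0 client API (table, family, and
qualifier names are placeholders):

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.BufferedMutator;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.util.Bytes;

  try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
       BufferedMutator mutator = conn.getBufferedMutator(TableName.valueOf("testtable"))) {
    Put put = new Put(Bytes.toBytes("row1"));
    put.addColumn(Bytes.toBytes("fam"), Bytes.toBytes("qual"), Bytes.toBytes("val"));
    mutator.mutate(put); // buffered on the client side
    mutator.flush();     // explicit flush; close() flushes too
  }

And to the question below: yes, Table.put() and Table.delete() now send their
RPCs right away; buffering only happens through a BufferedMutator.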
Best,
Lars
On Sun, Apr 5, 2015 at 10:49 AM, donhoff_h <165612...@qq.com> wrote:
> Hi, experts.
>
>
> I migrated my HBase
Congratulations Sean, and welcome!
On Thu, Mar 26, 2015 at 6:26 PM, Andrew Purtell wrote:
> On behalf of the Apache HBase PMC I'm pleased to announce that Sean Busbey
> has accepted our invitation to become a PMC member on the Apache HBase
> project. Sean has been an active and positive contribu
Great work everyone! Congratulations, this is the most awesome community to
be in.
Some coverage:
-
https://blogs.apache.org/foundation/entry/the_apache_software_foundation_announces72
-
http://www.heise.de/developer/meldung/Big-Data-HBase-1-0-erschienen-2558708.html
On Tue, Feb 24, 2015 at 9:30
We had others report their use earlier (previous thread about removing it).
So it is definitely in use. But... I agree it needs to be completed. I know
I have been tardy on this and need to speed up. :( Darn work always comes
in between.
On Thu, Sep 18, 2014 at 11:48 PM, Andrew Purtell
wrote:
>
Hi Tian-Ying,
I have taken over (I asked for it) the Thrift2 committer responsibilities,
though I am often out on customer business and am therefore rather slow. We
are working on all issues related to Thrift2 for now in HBASE-8818 (
https://issues.apache.org/jira/browse/HBASE-8818). One of the subtasks
Congrats! Welcome aboard.
On Feb 7, 2013, at 6:19, Ted Yu wrote:
> Hi,
> We've brought in one new Apache HBase Committer: Devaraj Das.
>
> On behalf of the Apache HBase PMC, I am excited to welcome Devaraj as
> committer.
>
> He has played a key role in unifying RPC engines for 0.96
> He fixe
che Telekom" by Juergen
Urbanski, Chief Technologist at T-Systems
- "Low latency data processing with Impala" by Lars George, Director EMEA
Services at Cloudera
We are looking for further volunteers to submit talks, so if you are working in
the "new" Big Data or NoSQL spac
+1
Congrats and good on you!
On Jan 2, 2013, at 9:02 PM, Stack wrote:
> Good on you lads. Thanks for all the great contribs so far.
> St.Ack
>
>
> On Wed, Jan 2, 2013 at 11:37 AM, Jonathan Hsieh wrote:
>
>> Along with bringing in the new year, we've brought in two new Apache
>> HBase Comm
u, Nov 29, 2012 at 3:43 PM, Adrien Mogenet
>>> wrote:
>>>
>>>> I'm writing this quick answer from memory (I read this book maybe one
>>>> year ago): it's still awesome for understanding concepts, good recipes,
>>>> HFi
Hi Otis,
My initial reaction was, "interesting idea". On second thought, though, I do
not see how this makes more sense than what we have now. HFiles combined
with Bloom filters are fast to look up anyway. Adding Lucene as another
"Storage Engine" (getting us close to Voldemort or MySQL
That is spot on, Stack: it is the worst-case scenario as you describe, i.e. all
cached information is stale.
Lars
On Aug 19, 2012, at 6:40 AM, Stack wrote:
> On Sat, Aug 18, 2012 at 2:13 AM, Lin Ma wrote:
>> Hello guys,
>>
>> I am referencing the Big Table paper about how a client locates a t
That is correct: the client blocks and retries for a configurable amount of
time until the regions are available again.
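The knobs that control the blocking are, as a sketch (values are placeholders;
defaults vary by version):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  Configuration conf = HBaseConfiguration.create();
  conf.setInt("hbase.client.retries.number", 10); // how many times the client retries
  conf.setLong("hbase.client.pause", 1000);       // base pause in ms between retries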
Lars
On Aug 2, 2012, at 7:01 AM, Mohit Anchlia wrote:
> On Wed, Aug 1, 2012 at 12:52 PM, Mohammad Tariq wrote:
>
>> Hello Mohit,
>>
>> If replication factor is set to som
It is basically unset:
this.regionSplitLimit = conf.getInt("hbase.regionserver.regionSplitLimit",
Integer.MAX_VALUE);
(from CompactSplitThread.java).
The number of regions is OK until you dilute the available heap share too much.
So you can have >1000 regions (given the block index,
am I
> still better to restart everything?
>
> JM
>
> 2012/7/5, Lars George :
>> Hi JM,
>>
>> So you already wiped everything on the HDFS level? The only thing left is
>> ZooKeeper. It should not hold you back, but it could still have an entry in
>> /hbase/
Hi JM,
So you already wiped everything on the HDFS level? The only thing left is
ZooKeeper. It should not hold you back, but it could still have an entry in
/hbase/table. Could you try the ZK shell and do an ls on that znode?
In any case, if you wipe HDFS anyway, please also try wiping the ZK dat
See https://issues.apache.org/jira/browse/HBASE-2947 for details.
On Jul 5, 2012, at 12:26 PM, Jean-Marc Spaggiari wrote:
> From Lars' book:
>
> "The batch() calls currently do not support the Increment instance,
> though this should change in near future".
>
> Which version are you using, it's
gards
>
> Ben
> On 2 Jul 2012, at 11:11, Lars George wrote:
>
>> Hi,
>>
>> Please see http://wiki.apache.org/hadoop/Hbase/PoweredBy
>>
>> Everyone on this list, kindly consider verifying that your entry on the
>> Powered By page is current.
>&
Hi,
Please see http://wiki.apache.org/hadoop/Hbase/PoweredBy
Everyone on this list, kindly consider verifying that your entry on the Powered
By page is current.
For those who are users of HBase but have not added yourselves to the above page:
if you are happy to share this with us and the rest
Hi lztaomin,
> org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode
> = Session expired
indicates that you have experienced the "Juliet Pause" issue, which means you
ran into a JVM garbage collection that lasted longer than the configured
ZooKeeper timeout threshold.
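The threshold in question is the session timeout. Taming GC is the real fix,
but raising it buys headroom; a sketch (the value is a placeholder):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  Configuration conf = HBaseConfiguration.create();
  // must lie within the ZK server's own min/max session bounds
  conf.setInt("zookeeper.session.timeout", 120000);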
Hi Mike,
> Running RS on a machine where DN isn't running?
I am not following here. Andy said that both are on the same node. Where in
this thread did someone imply something else? Just curious.
Cheers,
Lars
On Jul 2, 2012, at 7:11 AM, Michael Segel wrote:
> I'm sorry I'm losing it.
>
>
or to
>> operate on that region only and not on all the regions of a particular
>> table??Thank you.
>>
>> Regards,
>>Mohammad Tariq
>>
>>
>> On Wed, Jun 27, 2012 at 11:57 PM, Lars George
>> wrote:
>>> Hi Mohammad,
>&
, at 7:44 PM, Mohammad Tariq wrote:
> Hello Lars,
>
>Thank you so much for the quick response. Actually, I want to
> run my MapReduce jobs on a region that contains a specific set of
> data.
>
> Regards,
> Mohammad Tariq
>
>
> On Tue, Jun 26, 2012 at 9
Hi Mohammad,
The code runs on the server that is opening the region. It sounds to me like
this is not what you want, and that you need access to some sort of
resources only available on one specific server? Because if that is not the
case, then you are simply using the coprocessors the
Hi Anand,
The stop-hbase.sh script needs two things: a) the list of hostnames in the
conf/regionservers file, and b) ssh access to all machines. The script then
iterates over that list and sends the "hbase regionserver stop" command to each
machine using ssh.
In the past we had a shell command that co
3:40:11 KST 2012 in 4 milliseconds
>>
>
> As you see, data blocks of the HFile are stored across two different
> datanodes (hadoop-145 and hadoop-143).
>
> Let say a map task runs on hadoop-145 and needs to access the block 7. Then
> the map task needs to remotely access t
Hi,
I have done this at a customer site to overcome the 0.90.x slow WAL
performance. With one RS per DN we bottlenecked; with 5-7 RS per DN we were
able to hit the target rate.
Please note that we did this in lieu of the proper built-in options like WAL
compression, multiple WAL, or n-way wri
Hi Ash,
What Dave said.
MemStores are part of the write path only; the BlockCache is part of the read
path only. They compete for heap in their own right, but have
otherwise no direct relation.
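Both shares are tuned independently; a sketch with the 0.9x-era keys
(fractions are placeholders):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  Configuration conf = HBaseConfiguration.create();
  // heap fraction for all MemStores combined (write path)
  conf.setFloat("hbase.regionserver.global.memstore.upperLimit", 0.4f);
  // heap fraction for the BlockCache (read path)
  conf.setFloat("hfile.block.cache.size", 0.25f);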
Lars
On Jun 13, 2012, at 10:15 PM, Dave Revell wrote:
> Here's a good starting poin
Hi Ben,
See inline...
On Jun 15, 2012, at 6:56 AM, Ben Kim wrote:
> Hi,
>
> I've been posting questions in the mailing-list quite often lately, and
> here goes another one about data locality
> I read the excellent blog post about data locality that Lars Ge
What Amandeep says, and also keep in mind that with the current selection
process HBase holds O(log N) files for N data. So say for 2GB region sizes you
get 2-3 files. This means it is compacting files very "aggressively", and most
of these are "all files included" ones... which are the promoted
Hi,
Please see the following link:
http://berlinbuzzwords.de/wiki/hbase-workshop-and-hackathon
If you plan to attend Berlin Buzzwords this year, and are happy to hang out one
more day with your fellow HBasistas, then please consider the above event. A
few committers will be attending and help
Hi Sever,
Use the getTable() method of the given coprocessor environment. It gives you
access to any table in the cluster.
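A sketch of that, assuming the 0.92/0.94-era observer API (table name and the
mirrored Put are placeholders):

  import java.io.IOException;
  import org.apache.hadoop.hbase.client.HTableInterface;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
  import org.apache.hadoop.hbase.coprocessor.ObserverContext;
  import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
  import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
  import org.apache.hadoop.hbase.util.Bytes;

  public class CrossTableObserver extends BaseRegionObserver {
    @Override
    public void prePut(ObserverContext<RegionCoprocessorEnvironment> ctx,
        Put put, WALEdit edit, boolean writeToWAL) throws IOException {
      // getTable() hands you a client for any table in the cluster
      HTableInterface other = ctx.getEnvironment().getTable(Bytes.toBytes("othertable"));
      try {
        other.put(new Put(put.getRow())); // e.g. mirror the row key
      } finally {
        other.close();
      }
    }
  }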
Lars
On May 27, 2012, at 15:48, Sever wrote:
>
> Hello,
>
> I did not find any clear documentation on this aspect: is it possible to
> write from a co-processor that r
Hi Ben,
The answer is c) below.
HBase cannot split a row, so you are now carrying around a 10GB region. It does
not matter what you set the max filesize to, you are bound by this minimum
unit. It seems you should either store some data somewhere else, for example
outside of HBase (HDFS maybe?
Hi Mete,
OpenTSDB uses the "natural" availability of the metrics ID to bucket the
metrics by type. After that it relies on scanner batching, and block loads.
For your use case you could bin by time frames, say for example hash the start
of each hour into an MD5 and concatenate it with the actu
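A sketch of that key design (salt width, bucket layout, and names are my own,
not taken from OpenTSDB):

  import java.security.MessageDigest;
  import org.apache.hadoop.hbase.util.Bytes;

  byte[] rowKey(long ts, String metric) throws Exception {
    long hourStart = ts - (ts % 3600000L);  // top of the hour
    byte[] md5 = MessageDigest.getInstance("MD5").digest(Bytes.toBytes(hourStart));
    return Bytes.add(Bytes.head(md5, 4),    // salt spreads hours across regions
        Bytes.toBytes(hourStart),           // keeps one hour's rows contiguous
        Bytes.toBytes(metric));
  }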
+1, there are quite a few missing that should be in there. Please create a JIRA
issue so that we can discuss and agree on which to add.
Lars
On Apr 5, 2012, at 6:23 PM, Stack wrote:
> On Thu, Apr 5, 2012 at 4:20 AM, Bing Li wrote:
>> Dear all,
>>
>> I found some methods existed in HTable were
http://www.hbasecon.com/
On Apr 2, 2012, at 10:16 PM, Marcos Ortiz wrote:
> I heard yesterday that the first conference dedicated to HBase will be in the
> next days. Where can I find all the information about the event?
>
> regards and best wishes
>
> --
> Marcos Luis Ortíz Valmaseda (@marcos
Please note though that YCSB 0.1.4 is now fully mavenized and uses the POM to
pull in the various dependencies, as well as supplying a script that you can
use to avoid the lengthy java command line. So the build steps and invocation
have changed a bit, but the overall idea stays the same.
Lars
Hi Jon,
Please see the help the shell prints out; it has a section on how to use binary
characters. The important part is to enclose the code points in double quotes -
courtesy of JRuby. Single quotes are literals only.
HTH,
Lars
On Mar 19, 2012, at 6:03 PM, Jon Bender wrote:
> Hi everyone,
>
> I
d 1000 each produce different repeatable
>> results, and changing the families added as produces different reliable
>> results. There is no "sometimes" or "occasional", and if there was a
>> Major Compaction, it wouldn't happen that often.
>>
>>
Hi Peter,
Could you be hitting HBASE-5121? Or even HBASE-2856?
Lars
On Mar 17, 2012, at 20:46, Peter Wolf wrote:
> Hello,
>
> A couple of days ago, I asked about strange behavior in my "Scan.addFamiliy
> reduces results" thread.
>
> I want to confirm that I did find a bug, and if so, how to
Hi Konrad,
The masters simply use a znode in ZooKeeper to track which is which. This is a
basic and fundamental mechanism that ZK provides; nothing beyond that is
needed.
Note that you can start masters in non-active mode to ensure that you start
the active one on a node of your choice.
Lars
Hi Bing,
Add an "L" to the number, e.g.
ts.add(new Long(1329640759372L));
Lars
On Feb 20, 2012, at 2:34 AM, Bing Li wrote:
> Dear all,
>
> I am running the sample about TimeStampFilter as follows.
>
>List<Long> ts = new ArrayList<Long>();
>ts.add(new Long(1329640759364));
the gap/hole, and be on my merry way. Right
> now I resort to merging regions with hadoop fs -cp, which is a pain in
> a butt if there are too many of them to merge.
>
> Best,
>
> -Jack
>
> On Mon, Dec 26, 2011 at 1:49 AM, Lars George wrote:
>> You could
You could also have a go at https://issues.apache.org/jira/browse/HBASE-4009
A simple script based on check_meta.rb to patch holes.
Lars
On Dec 26, 2011, at 4:23 AM, Ted Yu wrote:
> OfflineMetaRepair class is in 0.90.5
> See HBASE-4377
>
> But maybe you're looking for an online solution.
>
Hi,
The current release candidate for 0.92 has all the coprocessor goodness
included. Try it if you like and let us know if it worked out for you.
Cheers,
Lars
On Dec 13, 2011, at 8:04 AM, Arsalan Bilal wrote:
> I am currently using HBase 0.90.4.
> I want to implement Coprocessors. I know that
Could you use the CompressionTest to verify that the library path is set up
properly?
$ hbase org.apache.hadoop.hbase.util.CompressionTest
hdfs://:8020//test.lzo lzo
Does it report OK? Same for Snappy? The reason I am asking is that when it does
not find the native libs it uses no compression a
On Fri, Dec 9, 2011 at 2:31 PM, Lars George wrote:
>> Hi,
>>
>> Do you maybe have an issue with naming? HBase takes the hostname (as shown
>> in the UI and the ZK dump there) and hands it as a locality hint to the MR
>> framework. But if that resolves to different names, then no match
Hi,
Do you maybe have an issue with naming? HBase takes the hostname (as shown in
the UI and the ZK dump there) and hands it as a locality hint to the MR
framework. But if that resolves to different names, then no match can be made
and the node to run the task on is chosen at random. Could you verify?
Lars
On
Could you please pastebin your Hadoop, HBase and ZooKeeper config files?
Lars
On Dec 1, 2011, at 11:23 AM, Mohammad Tariq wrote:
> Today when I issued bin/start-hbase.sh I ran into the following error -
>
> Thu Dec 1 15:47:30 IST 2011 Starting master on ubuntu
> ulimit -n 1024
> 2011-12-01 15
Hi Ed,
You need to be more precise, I am afraid. First of all, what does "some node
always dies" mean? Is the process gone? Which process is gone?
And the "error" you pasted is a WARN level log message that *might* indicate
some trouble, but is *not* the reason the "node has died". Please elaborate.
Also
Hi Sam,
You need to handle them all separately. The note - I assume - was solely
explaining the fact that the "load" of a region server is defined by the number
of regions it hosts, not the number of tables. Precreating the
regions for one table or for more than one is the same work: c
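For reference, precreating the regions looks roughly like this with the
0.9x-era admin API (table name, family, and split points are placeholders);
for several tables you simply repeat the call per table:

  import org.apache.hadoop.hbase.HColumnDescriptor;
  import org.apache.hadoop.hbase.HTableDescriptor;
  import org.apache.hadoop.hbase.client.HBaseAdmin;
  import org.apache.hadoop.hbase.util.Bytes;

  HBaseAdmin admin = new HBaseAdmin(conf); // conf created elsewhere
  HTableDescriptor desc = new HTableDescriptor("testtable");
  desc.addFamily(new HColumnDescriptor("fam"));
  byte[][] splits = { Bytes.toBytes("row-100"), Bytes.toBytes("row-200") };
  admin.createTable(desc, splits); // yields three regions split at the given keys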
Hey,
Looks like you have a corrupted ZK. Try to stop ZK (after stopping HBase of
course) and restart it. If that also fails, then wipe the data dir ZK uses
(check the config, for example the zoo.cfg for standalone ZK nodes). ZK is
going to recreate the data files and it should be able to move
No one has ever reported an adverse effect caused by the metrics
collection.
On Nov 30, 2011, at 12:15 PM, Rita wrote:
> I see. Is there a performance penalty when exposing jmx metrics?
>
>
> On Wed, Nov 30, 2011 at 3:21 AM, Lars George wrote:
>
>&g
o read cell values
> of the currently processed row, change them (e.g. by adding 1) and let
> them pass so that the changed values are written into the original
> table.
>
> Thanks,
> Thomas
>
> -Original Message-
> From: Lars George [mailto:lars.geo...@gmail.co
Hi Thomas,
There are some examples in my book, or here
https://github.com/larsgeorge/hbase-book/tree/master/ch04/src/main/java/coprocessor.
You can use the lifecycle methods start() and stop() to create the resources
you need. Since the class is instantiated only once this is a common approach
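A minimal sketch of that pattern (the thread pool is just a stand-in for
whatever resource you need to share):

  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import org.apache.hadoop.hbase.CoprocessorEnvironment;
  import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;

  public class ResourceHoldingObserver extends BaseRegionObserver {
    private ExecutorService pool; // shared by all callbacks of this instance

    @Override
    public void start(CoprocessorEnvironment env) {
      pool = Executors.newFixedThreadPool(2); // created once per instance
    }

    @Override
    public void stop(CoprocessorEnvironment env) {
      pool.shutdown(); // released when the coprocessor is unloaded
    }
  }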
You mean like this?
> $ echo ruok | nc 127.0.0.1 5111
> imok
In combination with ZK's stat command? That would be similar to what is exposed
by the JMX metrics for HBase?
Lars
On Nov 30, 2011, at 1:01 AM, Rita wrote:
> Is it possible to get hbase statistics using, nc? Similar to zookeeper
>
>
Hi,
Did you add the list of servers to the regionservers file in the
$HBASE_HOME/conf/ dir? Are you using Cygwin? Or what else is your environment?
Lars
On Nov 26, 2011, at 7:37 AM, Vamshi Krishna wrote:
> Hi i am running hbase on 3 machines, on one node master and regionserver,
> on other two
Your best bet - short of tailing the logs - seems to be the compactionQueue
metric, which is available through Ganglia and JMX. It should go back to zero
when all compactions are done.
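If you want to poll it programmatically, a sketch over plain JMX (host, port,
and the MBean/attribute names are assumptions for a 0.9x region server -
verify them in jconsole):

  import javax.management.MBeanServerConnection;
  import javax.management.ObjectName;
  import javax.management.remote.JMXConnector;
  import javax.management.remote.JMXConnectorFactory;
  import javax.management.remote.JMXServiceURL;

  public class CompactionQueueCheck {
    public static void main(String[] args) throws Exception {
      JMXServiceURL url = new JMXServiceURL(
          "service:jmx:rmi:///jndi/rmi://rs-host:10102/jmxrmi"); // placeholder
      JMXConnector jmxc = JMXConnectorFactory.connect(url);
      try {
        MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
        ObjectName rs = new ObjectName(
            "hadoop:service=RegionServer,name=RegionServerStatistics"); // assumed
        System.out.println("compactionQueueSize = "
            + mbsc.getAttribute(rs, "compactionQueueSize"));
      } finally {
        jmxc.close();
      }
    }
  }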
Lars
On Nov 27, 2011, at 1:41 PM, Rita wrote:
> Hello,
>
> When I do a major compaction of a table (1 billio
ov 24, 2011, at 16:31, Christopher Dorner
>> wrote:
>>
>>> Yes, snappy is installed.
>>>
>>> Lars, do you mean with "the rest" the hadoop datanode, namenode,
>>> jobtracker, hmaster, etc.?
>>> I am not really sure, but I think
lly sure, but I think I stopped the services first, before
> installing hadoop-0.20-native package.
>
> Christopher
>
> Am 24.11.2011 12:47, schrieb Lars George:
>> He has Snappy, or else the CompressionTest would not work.
>>
>> I can only imagine that there is an iss
He has Snappy, or else the CompressionTest would not work.
I can only imagine that there is an issue with the RS process not being able to
access the library path. Did you install the hadoop-0.20-native package
while the rest was already running?
Lars
On Nov 24, 2011, at 12:39 PM, Gaojincha
BTW, here the current version:
http://ofps.oreilly.com/titles/9781449396107/architecture.html#archstorage
Please add feedback online too :)
On Jul 6, 2011, at 10:37 PM, Lars George wrote:
> Hi Florin,
>
> Note that this was way old stuff. I updated that chapter the last 3-4 days.
Hi Florin,
Note that this was way old stuff. I updated that chapter over the last 3-4 days.
Inline...
1. How many MemStores can a Region have?
> HBDG: "A HRegion also has a MemStore"
> HBA: "A Store hosts a MemStore". A Store corresponds to a column family
> for a table for a given region.
>
Each Sto
king about it, I think the current approach is best :)
>
> Btw, I can find the start-hush.sh to launch the cluster, but not the script
> that run the examples.
>
> Tks,
> - Eric
>
>
>
> On 14/06/11 08:57, Lars George wrote:
>
>> Hi Eric,
>>
>> Sor
Hi Eric,
Sorry for the late reply. More inline...
I've given your hbase-book link on github [1] to Ioan (GSoC2011, see
> previous mail I just sent) to help him dig into the HBase API.
>
Great! Let me know if you find issues along the way.
> I've also checked-out your repo to learn more, the ba
Hi Kumar,
Not right now unfortunately. Sorry.
Regards,
Lars
On Sun, Jun 12, 2011 at 2:59 AM, Kumar Kandasami <
kumaravel.kandas...@gmail.com> wrote:
> Just curious, is there any early access (similar to Manning MEAP) to buy
> the book at this time. Working on a proof of concept I think this bo
Hi Jason,
This was discussed in the past, using the HFileInputFormat. The issue
is that you somehow need to flush all in-memory data *and* perform a
major compaction - or else you would need all the logic of the
ColumnTracker in the HFIF. Since that needs to scan all storage files
in parallel to a
Yes please, plus a patch would be awesome :)
On Fri, May 20, 2011 at 5:24 PM, Lucian Iordache
wrote:
> Hi guys,
>
> I've just found a problem with the class TableSplit. It implements "equals",
> but it does not implement hashCode as well, as it should.
> I've discovered it by trying to use a Ha
You can also check the compactionQueue on all RegionServers through
the metrics or JMX.
On Thu, May 19, 2011 at 5:01 PM, Stack wrote:
> On Thu, May 19, 2011 at 6:47 AM, Oleg Ruchovets wrote:
>> --What is the way to see how the major compaction process is executing (log
>> files or something else
key ranges and their salt prefixes. It's a bit like
> exporting some core? functionality to the client.
>
> Strange, I fell I missed your point :)
> Tks,
>
> - Eric
>
> Sidenote: ...and yes, it seems I will have to learn some ruby stuff (should
> get used to, cause
http://code.google.com/p/socorro/ ?
> I can find python scripts, but no jruby one...
>
> Aside the hash function I could reuse, are you saying that range queries are
> possible even with hashed keys (randomly distributed)?
> (If possible with the script, it will also be possible from
Hi Eric,
Mozilla Socorro uses an approach where they bucket ranges using
leading hashes to distribute them across servers. When you want to do
scans you need to create N scans, where N is the number of hashes and
then do a next() on each scanner, putting all KVs into one sorted list
(use the KeyCo
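A sketch of the fan-out part (bucket count and salt layout are placeholders;
the caller merges the next() results into one list sorted by the unsalted key):

  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.List;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.ResultScanner;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.util.Bytes;

  List<ResultScanner> openSaltedScanners(HTable table, byte[] start, byte[] stop)
      throws IOException {
    int buckets = 8; // placeholder: one scan per salt prefix
    List<ResultScanner> scanners = new ArrayList<ResultScanner>();
    for (int i = 0; i < buckets; i++) {
      byte[] salt = { (byte) i };
      scanners.add(table.getScanner(
          new Scan(Bytes.add(salt, start), Bytes.add(salt, stop))));
    }
    return scanners;
  }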
Hi Eran,
We need more details. It sounds like an issue with the ZooKeeper
quorum. In other words that it cannot connect to the ZK servers. Often
this is then logged during the task failures as it trying to connect
to localhost. Could you grab more logs and up them to pastebin or some
such?
Lars
Hi Alex,
This was added in https://issues.apache.org/jira/browse/HBASE-1200
affecting 0.90 and later only.
But this is an interesting question: the bulk import using HFOF can
handle compression, but not bloom filters. If you have bloom filters
enabled then compactions will add them, but it may ma
Hi,
Whenever I am with clients and we design for HBase, the first thing I
do is spend a few hours explaining exactly that scenario and the
architecture behind it. As for the importing, and HBase simply lacking
a graceful degradation that works in all cases, I nowadays quickly
point to the bulk import
Hi,
That is an interesting question and I noticed the same: stopped instances
(backed by EBS) get a new IP at start. Only a restart lets the IP survive.
Not sure how to handle this other than adding some extra scripts to patch the
configs on start.
Messy.
Anyone with experience willing to chime in?
L
Hi,
If you expect a lot of misses with that approach then enable bloom filters on
the second table for fast lookups of misses.
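Enabling them is a one-liner on the column descriptor; a sketch assuming the
0.90-era API (family name is a placeholder):

  import org.apache.hadoop.hbase.HColumnDescriptor;
  import org.apache.hadoop.hbase.regionserver.StoreFile;

  HColumnDescriptor fam = new HColumnDescriptor("fam");
  fam.setBloomFilterType(StoreFile.BloomType.ROW); // row blooms make misses cheap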
Lars
On Mar 11, 2011, at 9:44, Amandeep Khurana wrote:
> You can scan through one table and see if the other one has those rowids or
> not.
>
> On Thu, Mar 10, 2011
Hi,
I found the opposite. It depends on the queries, but if you are not doing a
full table scan the direct HBase handler approach is actually faster, as it is
more fine-grained than the usual Hive partition granularity of a day or so.
The scan can make use of row range selection and column familie
Hi,
The files are created mainly by the region servers, so yes, this needs to be on
all servers to work as expected.
Lars
On Mar 11, 2011, at 11:31, Felix Sprick wrote:
> Hi,
>
> I have a HBase cluster with 1 master and 3 slaves. I am wondering what
> needs to be done in order to change the
Hi,
How is he supposed to read the data from HDFS? Reading the low-level store
files is difficult to say the least, and also does not take into account unflushed
edits.
You better provide him with the data on HDFS yourself using some sort of export
or have him brush up on his API skillz.
You coul
Hi,
You seem to have managed to stuff up your hbase-site.xml file. Use xmllint for
example to verify where you borked it.
Lars
On Mar 11, 2011, at 17:11, Shahnawaz Saifi wrote:
>
> Hi,
>
> I am trying to configure hbase on 7 nodes cluster, which has hadoop master
> namenode, secondary nam
This also has been improved in 0.90.0 as per
http://issues.apache.org/jira/browse/HBASE-3132
What version are you on Otis?
Lars
On Tue, Mar 8, 2011 at 10:25 PM, Ryan Rawson wrote:
> Ascii table tells me bang = 33
>
> http://www.asciitable.com/
>
> so the average key len is 33.
>
> :-)
>
> -rya
so what point do I
> separate out ZK to be its own management piece.
>
>
>
>
>
>
> On 3/1/11 9:15 AM, "Lars George" wrote:
>
>>Hi Joseph,
>>
>>You are talking about a full distributed setup - just all with single
>>nodes? So your ZooKeeper is s
Hi Joseph,
You are talking about a full distributed setup - just all with single
nodes? So your ZooKeeper is started and maintained by you as well
separately? If so, then sure you can run it on your own. Well, even
with HBase you can run this on your own using the supplied version
that comes with
Hi Ondrej,
+1 on what Andrey said. This was replaced by the export/import tools.
Also see the linked issues in the last comment by Jon in the issue you
referred to. They point to the new stuff.
Lars
On Tue, Mar 1, 2011 at 3:52 PM, Andrey Stepachev wrote:
> Why you can't use org.apache.hadoop.hb
Hi James,
Could you tell us
a) what OS are you using?
b) what version of HBase are you using?
c) how did you set up HBase (distro or tarball)?
d) what errors do you get when the shell aborts?
e) does the master UI work?
f) can you use HBase through the API?
Thanks,
Lars
On Tue, Mar 1, 2011 at 1
+1 on 0.91 and having something for the summit. Also great to box the feature
set now, as I will base the book on 0.92 - anything else would be futile.
On Feb 26, 2011, at 23:24, Jean-Daniel Cryans wrote:
> Woah those are huge tasks!
>
> Also to consider:
>
> - integration with hadoop 0.22, should
What error are you getting? The NPE?
As Tatsuya pointed out, you are using the same time stamps:
private final long ts2 = ts1 + 100;
private final long ts3 = ts1 + 100;
That cannot work, you are overwriting cells.
Lars
On Thu, Feb 24, 2011 at 8:34 AM, 陈加俊 wrote:
> HTable
Hi Mike,
The values are Base64 encoded, so you need to use a decoder. HBase
ships with one in the REST package that you can use for example.
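For example, a sketch using the JDK's decoder (the encoded string is a
placeholder for a cell value taken from the REST response):

  import java.nio.charset.StandardCharsets;
  import java.util.Base64;

  String encoded = "dmFsdWU=";
  byte[] raw = Base64.getDecoder().decode(encoded);
  System.out.println(new String(raw, StandardCharsets.UTF_8)); // prints "value"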
Lars
On Wed, Feb 23, 2011 at 7:22 PM, Mike wrote:
> I'm having some issues converting the results of a restful call through
> stargate. I'm returning the
It could require some "force" flag to be specified 3 times, and ask for
confirmation as well, but I like this feature. Whenever I talk to
people who disabled a table and got stuck, it was to prepare a subsequent drop
table call. So this sounds really useful, given enough safety latches
are in place.
Lars
On Thu, Feb
seems.
Lars
On Sun, Feb 20, 2011 at 3:38 PM, Hari Sreekumar
wrote:
> Hi,
>
> I was going through the HBase architecture blog by Lars George (
> http://www.larsgeorge.com/2009/10/hbase-architecture-101-storage.html) and
> I just wanted a clarification regarding how HBase reads
Hi Vishal,
These are DEBUG level messages and are from the block cache, there is
nothing wrong with that. Can you explain more what you do and see?
Lars
On Wed, Feb 16, 2011 at 4:24 AM, Vishal Kapoor
wrote:
> all was working fine and suddenly I see a lot of logs like below
>
> 2011-02-15 22:19:
Oh, so you have that much time? Easy then... ;)
Congrats Steven and the whole team, you are awesome!
On Tue, Feb 15, 2011 at 11:37 AM, Steven Noels wrote:
> On Mon, Feb 14, 2011 at 6:28 PM, Stack wrote:
>
> Congrats lads. Keep the releases coming.
>>
>
> Just one more and we hit 1.0. Let's mak
Hi SS,
Some people that do not need strictly contiguous IDs also use block
increments of, say, 100. Each app server then gets 100 IDs to hand out,
and in case it dies it gets its next assigned 100 IDs and leaves a
small gap behind. That way you can take the pressure off the counter if
that is going to b
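A sketch of the block handout with the 0.90-era client (row, family, and
qualifier names are placeholders):

  import java.io.IOException;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.util.Bytes;

  long[] nextIdBlock(HTable table) throws IOException {
    // one atomic increment reserves a whole block of 100 IDs
    long upper = table.incrementColumnValue(Bytes.toBytes("counter"),
        Bytes.toBytes("fam"), Bytes.toBytes("seq"), 100L);
    return new long[] { upper - 99, upper }; // hand these out locally
  }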
wrote:
> Thank you, but I would like to know what kinds of key value pairs serve
> as the table descriptors in the values map. I know the column map stores the
> column name & column descriptor pair as the map entry. I should spend more
> time on the codes again tomorrow.
>
ng an MR job). Might
> be best to use the adaptive policies in some load tests, see what kind of
> new size it picks, and then hard code that with -Xmn from then on.
>
> Or if you just want a short answer with little tuning, 256m in a 6-8G total
> heap seems to work well for me with 2
Hi,
Did you read the comment above?
/**
* Private constructor used internally creating table descriptors for
* catalog tables: e.g. .META. and -ROOT-.
*/
Explains it, no?
Lars
On Fri, Feb 4, 2011 at 8:41 AM, Weishung Chung wrote:
> I am looking at the following protected HTableDesc