under corrupt logs is
> a concerning thing. Need to look at that.
>
> -Anoop-
>
> On Sat, Jul 26, 2014 at 11:17 PM, Andrew Purtell
> wrote:
>
>> My attempt to reproduce this issue:
>>
>> 1. Set up Hadoop 2.4.1 namenode, secondarynamenode, and datanode
all existing wals are replayed)
>
> And files moved to old logs but not to the corrupt folder is something to be
> checked. Any chance for a look there and a patch, Shankar?
>
> Anoop
>
>> On Sunday, July 27, 2014, Andrew Purtell wrote:
>
Let's take this to JIRA
On Wed, Jul 30, 2014 at 12:50 PM, Ted Yu wrote:
> In BaseDecoder#rethrowEofException() :
>
> if (!isEof) throw ioEx;
>
> LOG.error("Partial cell read caused by EOF: " + ioEx);
>
> EOFException eofEx = new EOFException("Partial cell read");
>
> eofEx.initC
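The wrapping pattern quoted above can be sketched with plain JDK classes (class and method names here are illustrative, not the actual HBase code):

```java
import java.io.EOFException;
import java.io.IOException;

public class RethrowSketch {
    // Wrap the original IOException in an EOFException, preserving it as
    // the cause so no stack information is lost when rethrowing.
    static EOFException asEof(IOException ioEx) {
        EOFException eofEx = new EOFException("Partial cell read");
        eofEx.initCause(ioEx);
        return eofEx;
    }

    public static void main(String[] args) {
        IOException io = new IOException("stream ended");
        System.out.println(asEof(io).getCause().getMessage());
    }
}
```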
You have a problem with your environment:
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.
NativeCodeLoader.buildSupportsSnappy()Z
Fix your native Hadoop libraries, or don't use Snappy.
This is not related to encryption.
On Fri, Aug 1, 2014 at 7:12 AM, Shankar hiremath <
shan
The 1st HBase 0.98.5 release candidate (RC0) is available for download
at http://people.apache.org/~apurtell/0.98.5RC0/ and Maven artifacts are
also available in the temporary repository
https://repository.apache.org/content/repositories/orgapachehbase-1027/
Signed with my code signing key D5
We have no known vulnerabilities that equate to a SQL injection attack
vulnerability. However, as Esteban says you'd want to treat HBase like any
other datastore underpinning a production service and out of an abundance
of caution deploy it into a secure enclave behind an internal service API,
so r
On Wed, Aug 6, 2014 at 11:20 PM, SiMaYunRui wrote:
> Further investigation shows that if I repeatedly fetch data very quickly, the
> latter scanner creations are very fast (< 100ms), but if there is a >1 minute
> interval between two data fetches, the latter is slow.
>
> I am certain that it’s not c
Trying out the at rest encryption feature is very much appreciated, but
perhaps we can spend a bit more time excluding other issues before
declaring an encryption bug. So far there hasn't been one (knock on wood!)
but a cursory search of user@hbase might imply the feature is buggy.
On Fri, Aug 8
d -hadoop2 bin tar ball
> > - checked contents, documentation, checksums, etc
> > - inserted some data, checked with raw-scans, flushed, compacted,
> inserted
> > again, checked again. All good.
> > - downloaded source tarball, checked contents, CHANGES.txt, etc.
>
st be specified.
> maxversions - To limit the number of versions of each column to be
> returned.
> batchsize - To limit the maximum number of values returned for each call
> to next().
> limit - The number of rows to return in the scan operation.
>
>
>
>
>
>
> Sent from
> > > LoadTestDataGeneratorWithVisibilityLabels.
> > > As it is an issue only with IT test and no code level issues, you can
> > take
> > > a call, Andy. I have raised HBASE-11716 and attached a simple patch.
> > >
> > > -Anoop-
> &
For the first/initial patch, a diff against master attached to the issue
would be fine. If the changes are reviewed and are acceptable, then you and
we would look at back porting the changes to other branches, at which point
in time helping us out with branch specific patches could be helpful.
:
> +1 binding
>
> Checked signing.
> Checked rat
> Checked tar layout
> Ran locally
> performed some snapshots.
>
>
> On Mon, Aug 11, 2014 at 10:53 AM, Andrew Purtell
> wrote:
>
> > Thanks. I just committed HBASE-11716 for release in 0.98.6 at the end of
&g
Another resource is the Javadoc for the rest server package:
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/rest/package-summary.html
On Wed, Aug 13, 2014 at 10:07 AM, Esteban Gutierrez
wrote:
> Hello Sean,
>
> Have you looked into the HBase wiki page for the REST server?
> http://wiki
Apache HBase 0.98.5 is now available for download. Get it from an Apache
mirror [1] or Maven repository.
The list of changes in this release can be found in the release notes [2]
or following this announcement.
Thanks to all who contributed to this release.
Best,
The HBase Dev Te
The latest stable version of HBase is 0.98.5.
The upgrade procedure for 0.94 -> 0.96 can be applied in the exact same
manner to 0.94 -> 0.98. There is no need to upgrade through 0.96 as an
intermediate step.
We discussed this recently and I expect we are going to stop supporting (as
a communi
Huge +1
On Tue, Aug 19, 2014 at 10:53 PM, Nick Dimiduk wrote:
> Our docs are getting a lot of love lately, courtesy of one Misty
> Stanley-Jones. As someone who joined this community by way of
> documentation, I'd like to say: Thank you, Misty!
>
> -n
>
--
Best regards,
- Andy
Problems
If using Java 7 and G1, you might want to look over:
https://software.intel.com/en-us/blogs/2014/06/18/part-1-tuning-java-garbage-collection-for-hbase
On Wed, Aug 20, 2014 at 8:26 AM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> I agree with Bryan.
>
> HBase starts to have some GC diff
Cool, let me ping my former colleagues about that.
On Wed, Aug 20, 2014 at 11:34 AM, Bryan Beaudreault <
bbeaudrea...@hubspot.com> wrote:
> That blog post is awesome, I hadn't seen it before. Eagerly looking
> forward to parts 2 and 3.
>
>
>
>
> On Wed, Aug 20,
G1 is not the default collector for Java 7. You have to enable it
specifically with -XX:+UseG1GC
I believe the default collector for the server VM in 7 (and 8) is the
parallel collector, equivalent to specifying -XX:+UseParallelGC.
Yes, you need to not select more than one collector on the JVM co
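For example, selecting G1 explicitly (and nothing else) might look like this in hbase-env.sh. This is an illustrative fragment, not a recommended production flag set; tune for your heap:

```
# hbase-env.sh -- choose exactly one collector; here G1 for the region server
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:+UseG1GC -XX:MaxGCPauseMillis=100"
# Do NOT combine this with -XX:+UseParallelGC or -XX:+UseConcMarkSweepGC
```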
See also the latest comment on the JIRA. :-)
On Thu, Aug 21, 2014 at 1:13 PM,
Srikanth Srungarapu wrote:
> Hi,
> Did you try taking a look at
>
> https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/RowMutations.html
> ?
>
> Thanks,
> Srikanth.
>
>
> On Thu, Aug 21, 2014 at 1:03 P
On Sat, Aug 23, 2014 at 12:11 PM, Johannes Schaback <
johannes.schab...@visual-meta.com> wrote:
> Exception in thread "defaultRpcServer.handler=5,queue=2,port=60020"
> java.lang.StackOverflowError
> at org.apache.hadoop.hbase.CellUtil$1.advance(CellUtil.java:210)
> at org.apache.ha
On Tue, Aug 26, 2014 at 4:25 PM, arthur.hk.c...@gmail.com <
arthur.hk.c...@gmail.com> wrote:
> Exception in thread "main" java.lang.RuntimeException: native snappy
> library not available: this version of libhadoop was built without snappy
> support.
You are almost there. Unfortunately the nati
If the 0.94 merge code doesn't work out the box we should fix that.
On Thu, Aug 28, 2014 at 11:26 AM, Bryan Beaudreault <
bbeaudrea...@hubspot.com> wrote:
> I've done it. This is the code I used:
> https://gist.github.com/bbeaudreault/7567385
>
> It comes from the hbase source, but is modified
What about providing the jstack as Lars suggested? That doesn't
require you to upgrade (yet)
0.94.23 is the same major version as 0.94.1. Upgrading to this version
is not the same process as a major upgrade from 0.92 to 0.94. Changes
like the split policy difference you mention don't happen in poi
0.98 is very similar to trunk and 0.96 is retired. It hasn't made
sense to make a differentiated guide for 0.98. The trunk version
applies just about everywhere. Where there is a difference in 0.98 it
is mentioned in the relevant section of the guide.
On Wed, Sep 10, 2014 at 2:06 PM, Jerry He wro
Thanks for writing in with this pointer Alex!
On Wed, Sep 10, 2014 at 11:11 AM, Alex Kamil wrote:
> I posted step-by-step instructions here on using Apache Hbase/Phoenix with
> Elasticsearch JDBC River.
>
> This might be useful to Elasticsearch users who want to use Hbase as a
> primary data stor
Apache HBase 0.98.6 is now available for download. Get it from an
Apache mirror [1] or Maven repository.
The list of changes in this release can be found in the release notes
[2] or following this announcement.
Thanks to all who contributed to this release.
Best,
The HBase Dev Team
1. http://w
Only where we touch the native Hadoop libraries I think. If you have
specified compression implemented with a Hadoop native library, like
snappy or lzo, and have forgotten to deploy 64 bit native libraries,
and move to this 64 bit environment, you won't be able to open the
affected table(s) until n
On Mon, Sep 15, 2014 at 4:28 PM, Jean-Marc Spaggiari
wrote:
> Do we have kind of native compression in PB?
Protobuf has its own encodings; the Java language bindings implement
them in Java.
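By way of illustration (a hypothetical sketch, not the protobuf library's actual code), the base-128 varint encoding that gives protobuf its compactness looks like this in plain Java:

```java
import java.io.ByteArrayOutputStream;

public class VarintSketch {
    // Encode an unsigned value as a protobuf-style base-128 varint:
    // 7 payload bits per byte, high bit set on all but the last byte.
    static byte[] encodeVarint(long v) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((v & ~0x7FL) != 0) {
            out.write((int) ((v & 0x7F) | 0x80));
            v >>>= 7;
        }
        out.write((int) v);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        StringBuilder hex = new StringBuilder();
        for (byte b : encodeVarint(300)) hex.append(String.format("%02x", b));
        System.out.println(hex); // 300 fits in two bytes instead of four
    }
}
```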
--
Best regards,
- Andy
Problems worthy of attack prove their worth by hitting back. - Piet
Hein (
tro
wrote:
> Did this release make it to maven central? I can't seem to find it.
>
> On Wed, Sep 10, 2014 at 4:50 PM, Andrew Purtell wrote:
>
>> Apache HBase 0.98.6 is now available for download. Get it from an
>> Apache mirror [1] or Maven repository.
>>
Apache HBase 0.98.6.1 is a patch release for 0.98.6 fixing a
regression in 0.98.6 involving non-superuser table creation when
security is active (HBASE-11972). Please use this release instead of
0.98.6. (We have removed 0.98.6 distribution artifacts in the
mirrors.) Get 0.98.6.1 from an Apache mirr
Survey: Is anyone using the Thrift 2 interface?
Not here.
On Thu, Sep 18, 2014 at 2:24 PM, Stack wrote:
> On Thu, Sep 18, 2014 at 3:56 AM, Kiran Kumar.M.R
> wrote:
>
>> Hi,
>> Our customers were using Hbase-0.94 through thrift1 (C++ clients).
>> Now HBase is getting upgraded to 0.98.x
>>
>> I s
1 GB heap is nowhere near enough to run if you're trying to test something
real (or approximate it with YCSB). Try 4 or 8, anything up to 31 GB,
use case dependent. >= 32 GB gives away compressed OOPs and maybe GC
issues.
Also, I recently redid the HBase YCSB client in a modern way for >=
0.98. See http
times I ran it, it'll hit a NullPointerException[1] ... but it
> definitely seems to point more at a problem in the older YCSB.
>
> [1] https://gist.github.com/joshwilliams/0570a3095ad6417ca74f
>
> Thanks for your help,
>
> -- Josh
>
>
>> On Thu, 2014-09-18 at 15
FWIW, I pushed a fix for that NPE
On Fri, Sep 19, 2014 at 9:13 AM, Andrew Purtell
wrote:
> Thanks for trying the new client out. Shame about that NPE, I'll look into it.
>
>
>
>> On Sep 18, 2014, at 8:43 PM, Josh Williams wrote:
>>
>> Hi Andrew,
>&g
ree it needs to be completed. I know
> I have been tardy on this and need to speed up. :( Darn work always comes
> in between.
>
> On Thu, Sep 18, 2014 at 11:48 PM, Andrew Purtell
> wrote:
>
>> Survey: Is anyone using the Thrift 2 interface?
>>
>> Not here
hbase-testing-util doesn't include code. It's a meta module used to collect
dependencies. Attach sources from hbase-server instead for the
HBaseTestingUtility class.
> On Sep 27, 2014, at 4:53 PM, Stephen Boesch wrote:
>
> I am trying to manually attach the sources for the HBaseTestingUtil t
> When HDFS gets tiered storage, we can revive this and put HBase's WAL on SSD
> storage.
I think the archival storage work just committed (and ported into branch-2?)
might be sufficient for a pilot of HBASE-6572. A few important changes were
filed late, like APIs for apps for setting policy an
Thanks for reporting this. Please see
https://issues.apache.org/jira/browse/HBASE-12141. Hope I've
understood the issue correctly. We will look into it.
On Wed, Oct 1, 2014 at 4:37 AM, Ian Brooks wrote:
> Hi,
>
> I have a java client that connects to hbase and reads and writes data to
> hbase.
are linked via openvpn rather than normal networking. So it's
> possible that openvpn is doing something to the packets that is affecting
> this.
>
> -Ian
>
> On Wednesday 01 October 2014 09:12:05 Andrew Purtell wrote:
>> Thanks for reporting this. Please see
>> http
On Thu, Oct 2, 2014 at 11:17 AM, Buckley,Ron wrote:
> Also, once the original /hbase got mv'd, a few of the region servers did
> some flushes before they aborted. Those RSs actually created a new
> /hbase, with new table directories, but only containing the data from the
> flush.
Sounds lik
would have been lost entirely.
>
>> On Thu, Oct 2, 2014 at 11:26 AM, Andrew Purtell wrote:
>>
>> On Thu, Oct 2, 2014 at 11:17 AM, Buckley,Ron wrote:
>>
>>> Also, once the original /hbase got mv'd, a few of the region servers did
>>> some flush'
other RPC call per
> flush.
>
> esteban.
>
> --
> Cloudera, Inc.
>
>
> On Thu, Oct 2, 2014 at 11:26 AM, Andrew Purtell
> wrote:
>
> > On Thu, Oct 2, 2014 at 11:17 AM, Buckley,Ron wrote:
> >
> > > Also, once the original /hbase got mv'd, a few of t
14 if you count createNewFile :-)
http://search-hadoop.com/m/282AcZLDAp1. Maybe you could tap Andrew or Colin
on the shoulder Esteban?
On Thu, Oct 2, 2014 at 2:13 PM, Andrew Purtell wrote:
> It's not the round trip, it's the atomicity of the operation. Consider a
> rename h
On Thu, Oct 2, 2014 at 3:02 PM, Esteban Gutierrez
wrote:
> Another possibility is that we could
> live with createNonRecursive until FileSystem becomes fully deprecated and
>
> we can migrate to FileContext, perhaps for HBase 3.x?
>
Sure
> HBASE-11045 goes in
> the opposite direction to t
On Thu, Oct 2, 2014 at 3:02 PM, Esteban Gutierrez
wrote:
> I get that isDirectory is not atomic and not the best solution, but at
> least can provide an alternative to fail the operation without using the
> deprecated API or altering FileSystem
>
This is not an alternative solution because it's
> The cluster has 2 m1.large nodes.
That's the problem right there.
You need to look at c3.4xlarge or i2 instances as a minimum requirement. M1
and even M3 instance types have ridiculously poor IO.
On Tue, Oct 7, 2014 at 3:01 PM, Khaled Elmeleegy wrote:
> Thanks Nicolas, Qiang.
>
> I was able
Apache HBase 0.98.
7
is now available for download. Get it from an Apache mirror [1] or Maven
repository.
The list of changes in this release can be found in the release notes [2]
or following this announcement.
Thanks to all who contributed to this release.
Best,
The HBase Dev Team
1. http://
I think Dima's point was, please correct me if I am wrong, if you are looking
for HBase related work, then contributing something to the project that gets
your name associated with the technology in public will have relevant companies
taking an interest in you they wouldn't otherwise. This is de
The reason Ted mentioned ZooKeeper is it looks like you are experiencing
network partitions on a regular basis just long enough for ZooKeeper
clients to time out. Given the regularity of the occurrence, I think you
need to look for an external (non-software) cause. What else is going on at
midnight
//hbase.apache.org/book.html#d0e1440
> >
> > On Fri, Oct 31, 2014 at 1:13 PM, Andrew Purtell
> > wrote:
> >
> > > No, we'll still need a -hadoop1 and -hadoop2 munged build of 0.98. I'm
> > only
> > > suggesting removing support for version 1
Admittedly it's been *years* since I experimented with pointing a HBase
root at a s3 or s3n filesystem, but my (dated) experience is it could take
some time for newly written objects to show up in a bucket. The write will
have completed and the file will be closed, but upon immediate open attempt
t
And note this is any file, potentially, table descriptors, what have you.
S3 isn't a filesystem, we can't pretend it is one.
On Fri, Nov 7, 2014 at 10:13 AM, Andrew Purtell wrote:
> Admittedly it's been *years* since I experimented with pointing a HBase
> root at a s3 or s
Try this HBase YCSB client instead:
https://github.com/apurtell/ycsb/tree/new_hbase_client
The HBase YCSB driver in the master repo holds on to one HTable instance
per driver thread. We accumulate writes into a 12MB write buffer before
flushing them en masse. This is why the behavior you are seein
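The buffering behavior described above can be sketched in miniature (hypothetical names; the real client batches the pending mutations into RPCs rather than discarding them):

```java
import java.util.ArrayList;
import java.util.List;

public class BufferedWriterSketch {
    // Accumulate writes until a byte threshold is crossed, then flush the
    // whole batch at once -- the source of the bursty latency pattern above.
    private final long thresholdBytes;
    private final List<byte[]> pending = new ArrayList<>();
    private long pendingBytes;
    int flushes;

    BufferedWriterSketch(long thresholdBytes) {
        this.thresholdBytes = thresholdBytes;
    }

    void put(byte[] mutation) {
        pending.add(mutation);
        pendingBytes += mutation.length;
        if (pendingBytes >= thresholdBytes) {
            flush();
        }
    }

    void flush() {
        // The real client would send one batched RPC here.
        pending.clear();
        pendingBytes = 0;
        flushes++;
    }

    public static void main(String[] args) {
        // 12MB threshold in the driver; scaled down to 1KB for the demo.
        BufferedWriterSketch buf = new BufferedWriterSketch(1024);
        for (int i = 0; i < 10; i++) buf.put(new byte[256]);
        System.out.println(buf.flushes); // only every 4th put triggers a flush
    }
}
```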
Apache HBase 0.98.8 is now available for download. Get it from an Apache
mirror [1] or Maven repository.
The list of changes in this release can be found in the release notes [2]
or following this announcement. This release contains a fix for a security
issue, please see HBASE-12536 [3] for more d
Zero downtime upgrade from 0.94 won't be possible. See
http://hbase.apache.org/book.html#d0e5199
On Mon, Dec 15, 2014 at 4:44 PM, Jeremy Carroll wrote:
>
> Looking for guidance on how to do a zero downtime upgrade from 0.94 -> 0.98
> (or 1.0 if it launches soon). As soon as we can figure this ou
: 0.94 going forward
> > >
> > > Does replication and snapshot export work from 0.94.6+ to a 0.96 or
> 0.98
> > > cluster?
> > >
> > > Presuming it does, shouldn't a site be able to use a multiple cluster
> set
> > > up to do a cut o
I believe HTableMultiplexer[1] is meant to stand in for HTablePool for
buffered writing. FWIW, I've not used it.
1:
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTableMultiplexer.html
On Fri, Dec 19, 2014 at 9:00 AM, Nick Dimiduk wrote:
>
> Hi Aaron,
>
> Your analysis is spot
super confusing that HTableInterface exposes setAutoFlush() and
> setWriteBufferSize(), given that the writeBuffer won't meaningfully buffer
> anything if all tables are short-lived.
>
> [1] https://issues.apache.org/jira/browse/HBASE-12728
>
> On Fri, Dec 19, 2014 at 10:31
dropping writes are not desirable
> in the general case. Further, my understanding was that the new connection
> implementation is designed to handle this kind of use-case (hence cc'ing
> Lars).
>
> On Fri, Dec 19, 2014 at 11:02 AM, Andrew Purtell
> wrote:
> >
> >
ement?
>
> -Solomon
>
> On Fri, Dec 19, 2014 at 2:19 PM, Andrew Purtell
> wrote:
> >
> > I don't like the dropped writes either. Just pointing out what we have
> now.
> > There is a gap no doubt.
> >
> > On Fri, Dec 19, 2014 at 11:16 AM, Nick Dimid
Apache HBase 0.98.9 is now available for download. Get it from an Apache
mirror [1] or Maven repository.
The list of changes in this release can be found in the release notes [2]
or following this announcement.
Thanks to all who contributed to this release.
Best,
The HBase Dev Team
1. http://ww
Thanks. It's a known issue that we are working to resolve. Sorry about the
flub.
On Wed, Jan 14, 2015 at 10:40 AM, Glenn, James wrote:
> FYI:
> The links from hbase.apache.org to the book (documentation) are not
> correct or working.
> -Jim
>
>
--
Best regards,
- Andy
Problems worthy of
> 2015-02-09 20:55:57,640 INFO [M:0;hp:38689] zookeeper.RecoverableZooKeeper:
> Node /hbase/namespace/hbase already exists and this is not a retry
> [...]
>
Questions: Does the above KeeperException indicate a problem?
> How do I wipe my laptop clean of hbase and zookeeper?
No, that is not an error
As Ted suggests it's best to recompile the source distribution of HBase
against more recent versions of Hadoop than what we've compiled our
convenience binaries against. Hadoop often makes incompatible changes
across point releases, which can also extend to dependencies (versions of
guava or protob
e 0.98.9-hadoop2, but the build produces artifacts without that
> hadoop1/2 discriminator. Is that added afterwards somehow? Do I need to
> care about it if I built my own version using the hadoop2 profile?
>
> Thanks in advance,
> Ian
>
> > On Feb 10, 2015, at 1:33 PM, Andrew
This is how we build HBase in Bigtop:
Common files:
https://github.com/apache/bigtop/tree/master/bigtop-packages/src/common/hbase
RPM specfile:
https://github.com/apache/bigtop/blob/master/bigtop-packages/src/rpm/hbase/SPECS/hbase.spec
Look at do-component-build in common/ especially.
If
Apache HBase 0.98.10.1 is now available for download. Get it from an Apache
mirror [1] or Maven repository.
The list of changes in this release can be found in the release notes
[2][3] or following this announcement.
Thanks to all who contributed to this release.
Best,
The HBase Dev Team
1. htt
> What is hbase's philosophy in this? Does it allow some degree of data
loss?
HBase doesn't "allow" data loss, in the sense that HBase never chooses on
its own to be less than fully durable. However, our client API does allow
users to submit mutations with different durability guarantees.
The def
We simply use MD5 to get a hash where collision probability is very small.
There's no security implication, we don't use MD5 here to protect anything
in a cryptographic sense. In fact we could probably use a faster algorithm
with weaker collision properties for this, but MD5 is ok.
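A minimal illustration with the plain JDK (a hypothetical helper, not HBase's actual code) of using MD5 purely as a distribution hash:

```java
import java.security.MessageDigest;

public class Md5KeySketch {
    // Hex-encode the MD5 digest of a key. Used only to spread keys evenly;
    // nothing here relies on MD5 for cryptographic protection.
    static String md5Hex(byte[] key) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(key)) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(md5Hex("abc".getBytes("UTF-8")));
    }
}
```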
On Tue, Feb 17,
Congratulations, all!
On Tue, Feb 24, 2015 at 12:30 AM, Enis Söztutar wrote:
> The HBase Team is pleased to announce the immediate release of HBase 1.0.0.
> Download it from your favorite Apache mirror [1] or maven repository.
>
> HBase 1.0.0 is the next stable release, and the start of "semanti
What vendor/version/release corresponds with version
"0.98.0.2.1.2.1-471-hadoop2" ? I've not seen that before.
We did recently analyze and fix an issue involving the flush queue, see
HBASE-10499 (https://issues.apache.org/jira/browse/HBASE-10499). This was
released in 0.98.10. I'm not definitively
Spark supports creating RDDs using Hadoop input and output formats (
https://spark.apache.org/docs/1.2.1/api/scala/index.html#org.apache.spark.rdd.HadoopRDD)
. You can use our TableInputFormat (
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableInputFormat.html)
or TableOutput
Your best bet is to look at the examples provided in the hbase-examples
module, f.e.
https://github.com/apache/hbase/blob/branch-1.0/hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/RowCountEndpoint.java
or
https://github.com/apache/hbase/blob/branch-1.0/hbase-examples/src/m
> I think the final issue with hadoop-common (re: unimplemented sync for local
filesystems) is the one showstopper for us.
Although the unnecessary overhead would be significant, you could run a
stripped down HDFS stack on the VM. Give the NameNode, SecondaryNameNode,
and DataNode 1GB of heap only
... And if you have at most "small data" at this stage, you might be able
to cut the heap sizes of the HDFS daemons in half.
On Fri, Mar 6, 2015 at 2:18 PM, Andrew Purtell wrote:
> > I think the final issue with hadoop-common (re: unimplemented sync for local
> file
Apache HBase 0.98.11 is now available for download. Get it from an Apache
mirror [1] or Maven repository.
The list of changes in this release can be found in the release notes [2]
or following this announcement.
Thanks to all who contributed to this release.
Best,
The HBase Dev Team
1. http://w
We're hitting this type of HDFS issue in production too. Your best option
is to kill the regionserver process forcefully, start a replacement, and
let the region(s) affected recover. All edits should be persisted to the
WAL regardless of what Ted said about flushing.
We are working on the problem,
Is there a particular reason why you are using HBase 0.98.0? The latest
0.98 release is 0.98.11. There's a known performance issue with 0.98.0
pertaining to RPC that was fixed in later releases, you should move up from
0.98.0. In addition hundreds of improvements and bug fixes have gone into
the te
On Tue, Mar 17, 2015 at 9:47 PM, Stack wrote:
>
> > If it's possible to recover all of the file except
> > a portion of the affected block, that would be OK too.
>
> I actually do not see a 'fix' or 'recover' on the hfile tool. We need to
> add it so you can recover all but the bad block (we sho
You've missed HBASE-12972 and HBASE-12975. See the discussion on those
issues why 1.1.0.
On Wed, Mar 25, 2015 at 3:05 PM, Nick Dimiduk wrote:
> With about 100 issues beyond the 1.0.x line [0], I think it's time to start
> talking about our 1.1.0 release. Noteworthy goodness already committed to
On behalf of the Apache HBase PMC I'm pleased to announce that Sean Busbey
has accepted our invitation to become a PMC member on the Apache HBase
project. Sean has been an active and positive contributor in many areas,
including on project meta-concerns such as versioning, build
infrastructure, cod
On behalf of the Apache HBase PMC, I am pleased to announce that Srikanth
Srungarapu has accepted the PMC's invitation to become a committer on the
project. We appreciate all of Srikanth's hard work and generous
contributions thus far, and look forward to his continued involvement.
Congratulations
On behalf of the Apache HBase PMC, I am pleased to announce that Jerry He
has accepted the PMC's invitation to become a committer on the project. We
appreciate all of Jerry's hard work and generous contributions thus far,
and look forward to his continued involvement.
Congratulations and welcome,
> Proving it to yourself is sometimes the hardest part!
Yes!
On Mon, Mar 2, 2015 at 2:17 PM, Gary Helmling wrote:
> Proving it to yourself is sometimes the hardest part!
>
> On Mon, Mar 2, 2015 at 2:11 PM Nick Dimiduk wrote:
>
> > Gary to the rescue! Does it still count as being right even i
> Sorry, there is something I asked wrongly because I was understanding it
wrongly.
> 1 region server correspond to 1 namenode and 1 write to 1 name node will
replicate to 3 datanodes...
No, but this may just be a terminology problem.
The NameNode isn't an HBase daemon, it's HDFS.
HDFS writers,
This is one person's opinion, to which he is absolutely entitled, but
blanket black-and-white statements like "coprocessors are poorly
implemented" obviously do not reflect the opinion of all those who have
used them successfully, nor of the HBase committers, or we would remove
the feature. On the ot
ny case and in all seriousness. Michael, feel free to educate
> > yourself about what the intended use of coprocessors is - preferably
> before
> > you come here and start an argument ... again. We're more than happy to
> > accept a patch from you with a "correct" i
Yeah the pointer for REST API should send users to the package docs, not
the wiki
On Thu, Apr 16, 2015 at 10:24 AM, Nick Dimiduk wrote:
> On Wed, Apr 15, 2015 at 11:27 PM, anil gupta
> wrote:
>
> > Also, it would be nice if we could move docs of startgate from here:
> > https://wiki.apache.org
Looks fine to me, Chrome and Firefox tested. As Nick says, it looks like the
CSS asset didn't load at Anil's location for whatever reason.
On Thu, Apr 16, 2015 at 8:36 AM, Stack wrote:
> Are others running into the issue Anil sees?
> Thanks,
> St.Ack
>
> On Thu, Apr 16, 2015 at 8:13 AM, anil gupta
I don't believe we can self-host static assets cheaply enough relative to what
CDNs charge.
> On Apr 19, 2015, at 10:52 AM, Josh Elser wrote:
>
> I remember one case where my browsers just hung fetching things from the
> bootstrap CDN. I assumed it was due to some issue on their side because
Apache HBase 0.98.12 is now available for download. Get it from an Apache
mirror [1] or Maven repository.
The list of changes in this release can be found in the release notes [2]
or following this announcement.
Thanks to all who contributed to this release.
Best,
The HBase Dev Team
1. http://w
This is a VOTE thread. This discussion is highly off topic. Please drop dev@
from the CC and change the subject.
> On Apr 30, 2015, at 7:30 AM, Ted Yu wrote:
>
> And the following:
>
>
>    <dependency>
>      <groupId>org.apache.hbase</groupId>
>      <artifactId>hbase-protocol</artifactId>
>      <version>${hbase.version}</version>
>    </dependency>
>
So we're also dropping support for 2.2 for the 1.0.x releases too?
On Thu, Apr 30, 2015 at 4:43 PM, Stack wrote:
> On Thu, Apr 30, 2015 at 4:26 PM, Enis Söztutar wrote:
>
> > The build is broken with Hadoop-2.2 because mini-kdc is not found:
> >
> > [ERROR] Failed to execute goal on project hbase-
> to drop support for 2.2, or sink the RC and fix the issue.
>
> Enis
>
> On Thu, Apr 30, 2015 at 9:12 AM, Andrew Purtell
> wrote:
>
> > This is a VOTE thread. This discussion is highly off topic. Please drop
> > dev@ from the CC and change the subject.
&
I prefer to patch the POMs.
> On May 5, 2015, at 4:16 PM, Nick Dimiduk wrote:
>
> So what's the conclusion here? Are we dropping 2.2 support or updating the
> poms and sinking the RC?
>
>> On Fri, May 1, 2015 at 7:47 AM, Sean Busbey wrote:
>>
>> O
15 at 10:13 AM, Andrew Purtell
> wrote:
>
>> I prefer to patch the POMs.
>
> Is this a formal -1?
>
> I've opened HBASE-13637 for tracking this issue. Let's get it fixed and
> I'll spin a new RC tonight.
>
>>> On May 5, 2015, at 4:16 PM, Nick Dim
> From: Bradford Stephens
> [...] I'm trying to do gets by using JSONP, which
> embeds/retrieves requests in
I think generally people are building their own HBase AMIs for use up on EC2,
but I'd like to announce there are new public AMIs available in all of the AWS
regions:
HBase 0.20.6
us-east-1
ami-2469834d
apache-hbase-images-us-east-1/hbase-0.20.6-i386.manifest.xml
ami-2c698345