Thanks Ted,
I also got the Mapkeeper error; commenting out the module works around it
(https://github.com/brianfrankcooper/YCSB/issues/152).
On Wed, Oct 22, 2014 at 10:23 AM, Ted Yu wrote:
> Once you clone ycsb, you should build it against your choice of 0.98 release.
>
> Here's the thread where Andrew mentioned his ycsb repo:
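A hedged sketch of the workaround described above for YCSB issue 152: comment the mapkeeper binding out of the module list in YCSB's top-level pom.xml so the build skips it. The neighboring module names shown are illustrative, not the exact 0.1.4 list:

```xml
<!-- YCSB top-level pom.xml: skip the mapkeeper binding (issue 152).
     Surrounding module names are illustrative, not exhaustive. -->
<modules>
  <module>core</module>
  <module>hbase</module>
  <!-- <module>mapkeeper</module> -->
</modules>
```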
As an aside, if there are changes we'd like to see in YCSB, upstream has
started taking patches again with a bit of prodding.
On Tue, Oct 21, 2014 at 9:23 PM, Ted Yu wrote:
> Once you clone ycsb, you should build it against your choice of 0.98 release.
>
> Here's the thread where Andrew mentioned his ycsb repo:
Once you clone ycsb, you should build it against your choice of 0.98 release.
Here's the thread where Andrew mentioned his ycsb repo:
http://search-hadoop.com/m/DHED4NaxYb1/andrew+purtell+ycsb+2014&subj=Re+Performance+oddity+between+AWS+instance+sizes
Cheers
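Presumably "build it against your choice of 0.98 release" means pointing the YCSB HBase binding at a 0.98 client artifact before running mvn clean package. A minimal sketch; the exact dependency coordinates in the 0.1.4-era tree are an assumption (this is why Andrew kept his own repo):

```xml
<!-- hbase/pom.xml in the YCSB tree: depend on a 0.98 client artifact,
     then rebuild with "mvn clean package" from the YCSB root. -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-client</artifactId>
  <version>0.98.5-hadoop2</version>
</dependency>
```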
On Tue, Oct 21, 2014 at 7:15 PM, Qiang Tian wrote:
Thanks Ted.
Do you mean I should rebuild ycsb? Could you point me to Andrew's repo?
On Tue, Oct 21, 2014 at 5:37 PM, Ted Yu wrote:
> Cycling bits:
> http://search-hadoop.com/m/DHED4N0syk1
>
> Andrew has his ycsb repo as well.
>
> Cheers
>
> On Oct 21, 2014, at 2:28 AM, Qiang Tian wrote:
See this blog post:
http://www.flurry.com/2012/12/06/exploring-dynamic-loading-of-custom-filters-i#.VEcNtNR4rZg
Cheers
On Tue, Oct 21, 2014 at 6:48 PM, Kevin wrote:
> Also, if you do end up using dynamic loading, you'll need a way to version
> your filters because the RS will not reload a JAR if it changes.
Also, if you do end up using dynamic loading, you'll need a way to version
your filters because the RS will not reload a JAR if it changes.
On Tue, Oct 21, 2014 at 9:46 PM, Kevin wrote:
> I haven't tried dynamic loading of filters on RS, but I know it does
> exist. See https://issues.apache.org/jira/browse/HBASE-9301.
I haven't tried dynamic loading of filters on RS, but I know it does exist.
See https://issues.apache.org/jira/browse/HBASE-9301.
If you still can't get it to work, then I suggest distributing your filters
to the RS and restart them. Let us know how everything works out.
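For the dynamic-loading route Kevin mentions (HBASE-9301), a minimal hbase-site.xml sketch: region servers pick custom filter jars up from an HDFS directory that by default resolves to ${hbase.rootdir}/lib. The path below is only an example:

```xml
<!-- hbase-site.xml: directory the region servers scan for custom
     filter/coprocessor jars (defaults to ${hbase.rootdir}/lib). -->
<property>
  <name>hbase.dynamic.jars.dir</name>
  <value>hdfs://namenode:8020/hbase/lib</value>
</property>
```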
On Tue, Oct 21, 2014 at 9
Hi,
I've read that on modern hardware I should increase the value of the
io.file.buffer.size parameter of HDFS, up to 128 KB or so. [1] Does this
advice still hold true in the context of HBase? We've done a series of
performance benchmarks with different values of it, but couldn't
observe a notic
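For reference, the tuning being benchmarked is a core-site.xml change on the cluster nodes; 131072 bytes is the 128 KB value the question mentions (the Hadoop default is 4096):

```xml
<!-- core-site.xml: raise the HDFS read/write buffer from the 4 KB
     default to 128 KB. -->
<property>
  <name>io.file.buffer.size</name>
  <value>131072</value>
</property>
```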
Thanks Kevin!
I was under the impression, probably mistakenly, that as of 0.96 placing
the filter on HDFS under the hbase lib directory is sufficient and the RS
should load the filter dynamically from HDFS. Is that not the case?
On Tuesday, October 21, 2014, Kevin wrote:
> BTW, the error looks like you didn't distribute your custom filter to your region servers.
Did you restart the HMaster? You can check the master's runtime conf at
the web UI's /conf endpoint, which should show this config.
On Sun, Oct 19, 2014 at 6:00 PM, ch huang wrote:
> thanks for the reply, but I did not deploy the cluster using Cloudera Manager,
> so that information is not applicable!
>
> On Fri, Oct 17, 2014 a
BTW, the error looks like you didn't distribute your custom filter to your
region servers.
On Tue, Oct 21, 2014 at 1:34 PM, Kevin wrote:
> Matt,
>
> You should create your own proto file and compile that with the Google
> Protocol Buffer compiler. Take a look at the SingleColumnValueFilter's
> code:
All machines use IPv4.
On Tue, Oct 21, 2014 at 1:36 PM, Ted Yu wrote:
> Do you use IPv6?
>
> If so, this is related:
> HBASE-12115
>
> Cheers
>
> On Tue, Oct 21, 2014 at 10:26 AM, Kevin wrote:
>
> > Hi,
> >
> > I have connected a client machine with two network interfaces to an
> > internal, isolated HBase cluster and an external network.
Do you use IPv6?
If so, this is related:
HBASE-12115
Cheers
On Tue, Oct 21, 2014 at 10:26 AM, Kevin wrote:
> Hi,
>
> I have connected a client machine with two network interfaces to an
> internal, isolated HBase cluster and an external network. The HBase cluster
> is on its own private LAN, away from the external network.
Matt,
You should create your own proto file and compile that with the Google
Protocol Buffer compiler. Take a look at the SingleColumnValueFilter's
code:
https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java#L327
You wil
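A minimal sketch of the pattern Kevin points at in SingleColumnValueFilter, for the 0.98-era filter API. MyFilterProtos stands in for whatever class protoc generates from your own .proto file, and the qualifier field is just an example:

```java
import org.apache.hadoop.hbase.exceptions.DeserializationException;
import org.apache.hadoop.hbase.filter.FilterBase;
import com.google.protobuf.ByteString;
import com.google.protobuf.InvalidProtocolBufferException;

public class MyFilter extends FilterBase {
  private final byte[] qualifier;

  public MyFilter(byte[] qualifier) {
    this.qualifier = qualifier;
  }

  // Serialize the filter's state with the protoc-generated builder so the
  // client can ship it to the region servers.
  @Override
  public byte[] toByteArray() {
    return MyFilterProtos.MyFilter.newBuilder()
        .setQualifier(ByteString.copyFrom(qualifier))
        .build().toByteArray();
  }

  // Deserialize on the server side; HBase finds this method reflectively.
  public static MyFilter parseFrom(final byte[] pbBytes)
      throws DeserializationException {
    try {
      MyFilterProtos.MyFilter proto = MyFilterProtos.MyFilter.parseFrom(pbBytes);
      return new MyFilter(proto.getQualifier().toByteArray());
    } catch (InvalidProtocolBufferException e) {
      throw new DeserializationException(e);
    }
  }
}
```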
Thanks for your replies, Jean and Dhaval.
On Tue, Oct 21, 2014 at 6:57 PM, Dhaval Shah wrote:
> You can achieve what you want using versions and some hackery with
> timestamps
Hi,
I have connected a client machine with two network interfaces to an
internal, isolated HBase cluster and an external network. The HBase cluster
is on its own private LAN, away from the external network. After installing
and updating the Hadoop and HBase configuration files on the client
machin
bq. When using Delete#deleteColumns everything seems to be working fine
Please confirm that the issue you observe was with Delete#deleteColumn
(different from the method mentioned in subject).
Can you try with 0.94.24 (the latest 0.94 release)?
If you can capture this using a unit test, that
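For readers following along, a short sketch of the distinction being drawn between Delete#deleteColumn and Delete#deleteColumns in the 0.94-era client API; the row, family, and qualifier names are made up:

```java
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteSemantics {
  public static void main(String[] args) {
    byte[] row = Bytes.toBytes("row1");
    byte[] f = Bytes.toBytes("f1");
    byte[] q = Bytes.toBytes("q1");

    // deleteColumn (no "s") masks only the most recent version of the cell.
    Delete latestOnly = new Delete(row);
    latestOnly.deleteColumn(f, q);

    // deleteColumns masks every version of the cell.
    Delete allVersions = new Delete(row);
    allVersions.deleteColumns(f, q);
  }
}
```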
Thanks all. I will get back if I take that direction.
-Nishanth
On Tue, Oct 21, 2014 at 8:15 AM, Ted Yu wrote:
> The link is about Cassandra, not hbase.
>
> Cheers
>
> On Tue, Oct 21, 2014 at 2:53 AM, Qiang Tian wrote:
>
> > Do you want some SQL-on-Hadoop engine that could access HBase files directly?
> > I
Snapshots are off by default in 0.94 because they are a new feature backported
from the 0.96 branch.
From 0.96 on, snapshots are on by default.
Matteo
On Tue, Oct 21, 2014 at 4:34 PM, Serega Sheypak
wrote:
> Hi, I tried to create a snapshot and it's not enabled
>
> : java.io.IOException: java.lang.Un
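The property named in the error quoted below goes into the master's hbase-site.xml on a 0.94 cluster, followed by a master restart; a minimal sketch:

```xml
<!-- hbase-site.xml on the HBase Master: enable snapshots on 0.94. -->
<property>
  <name>hbase.snapshot.enabled</name>
  <value>true</value>
</property>
```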
Hi, I tried to create a snapshot and it's not enabled:
java.io.IOException: java.lang.UnsupportedOperationException: To use
snapshots, You must add to the hbase-site.xml of the HBase Master:
'hbase.snapshot.enabled' property with value 'true'.
Is it off by default? If yes, then why? What is the reas
Hi all,
we are using HBase version 0.94.6-cdh4.3.1 and I have a suspicion that a
Delete written to hbase through HFileOutputFormat might be ignored (and
not delete any data) in the following scenario:
* a Delete object is used to delete the data at the client side
* call to "deleteColumn" in
The link is about Cassandra, not hbase.
Cheers
On Tue, Oct 21, 2014 at 2:53 AM, Qiang Tian wrote:
> Do you want some SQL-on-Hadoop engine that could access HBase files directly?
> I did a quick search and found
> http://www.slideshare.net/Stratio/integrating-sparkandcassandra (P35), but
> am not sure if I understand correctly.
You can achieve what you want using versions and some hackery with timestamps
Original message from Jean-Marc Spaggiari, 10/21/2014 9:02 AM (GMT-05:00), Re: Duplicate Value Inserts in HBase:
You can do check-and-puts to validate whether the value is already there, but it's slower...
You can do check-and-puts to validate whether the value is already there,
but it's slower...
2014-10-21 8:50 GMT-04:00 Krishna Kalyan :
> Thanks Jean,
> If I put the same value in my table for a particular column for a rowkey, I
> want HBase to reject this value and retain the old value with the old timestamp.
> In other words, update only when the value changes.
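A minimal sketch of Jean-Marc's check-and-put suggestion against the 0.94-era HTable API; the table, family, and values are hypothetical. The Put is applied atomically only when the cell's current value matches the expected one:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class CheckAndPutExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "t1");
    byte[] row = Bytes.toBytes("row1");
    byte[] f = Bytes.toBytes("f1");
    byte[] q = Bytes.toBytes("q1");

    Put put = new Put(row);
    put.add(f, q, Bytes.toBytes("new-value"));

    // Atomic: the put happens only if the cell currently holds "old-value";
    // otherwise checkAndPut returns false and writes nothing.
    boolean applied = table.checkAndPut(row, f, q, Bytes.toBytes("old-value"), put);
    System.out.println("applied = " + applied);
    table.close();
  }
}
```

Note this still costs a server-side check per write, which is the slowdown Jean-Marc warns about.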
Thanks Jean,
If I put the same value in my table for a particular column for a rowkey, I
want HBase to reject this value and retain the old value with the old timestamp.
In other words, update only when the value changes.
Regards,
Krishna
On Tue, Oct 21, 2014 at 6:02 PM, Jean-Marc Spaggiari <
jean-m...@spaggiari
Hi Krishna,
HBase will store them in the same row, same cell, but you will have 2
versions. If you want to keep just one, set VERSIONS=1 on the table
side and only one will be stored. Is that what you mean?
JM
2014-10-21 8:29 GMT-04:00 Krishna Kalyan :
> Hi,
> I have an HBase table which is populated from Pig using PigStorage.
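A hedged sketch of Jean-Marc's "version=1 on the table side" suggestion using the 0.94-era admin API; the table and family names are made up:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class OneVersionFamily {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    // Keep only the newest version of each cell in family "f1".
    HColumnDescriptor family = new HColumnDescriptor("f1");
    family.setMaxVersions(1);

    admin.disableTable("t1");
    admin.modifyColumn("t1", family);  // replaces the family's schema
    admin.enableTable("t1");
    admin.close();
  }
}
```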
Hi,
I have an HBase table which is populated from Pig using PigStorage.
While inserting, suppose for a rowkey I have a duplicate value.
Is there a way to prevent an update?
I want to maintain the version history for my values, which are unique.
Regards,
Krishna
Do you want some SQL-on-Hadoop engine that could access HBase files directly?
I did a quick search and found
http://www.slideshare.net/Stratio/integrating-sparkandcassandra (P35), but
am not sure if I understand correctly.
On Tue, Oct 21, 2014 at 12:15 PM, Nick Dimiduk wrote:
> Not currently. HBase uses its own
Cycling bits:
http://search-hadoop.com/m/DHED4N0syk1
Andrew has his ycsb repo as well.
Cheers
On Oct 21, 2014, at 2:28 AM, Qiang Tian wrote:
> Hi Guys,
> I am running YCSB 0.1.4 against hbase 0.98.5,
>
> "bin/ycsb load hbase -P workloads/workloada -p columnfamily=f1 -p
> recordcount=1000 -p
Hi Guys,
I am running YCSB 0.1.4 against hbase 0.98.5,
"bin/ycsb load hbase -P workloads/workloada -p columnfamily=f1 -p
recordcount=1000 -p threadcount=4 -s | tee -a workloada.dat" gets stuck as
below:
10 sec: 0 operations;
20 sec: 0 operations;
30 sec: 0 operations;
40 sec: 0 operations;
50 s