Hello,
I am running a YCSB instance to insert data into HBase. All was well when
this was against HBase 0.96.1. Now I am trying to run the same program against
another cluster which is configured with HBase 0.98.4. I get the below
error on the client side. Could someone help me with this?
The znode
Do you have replication turned on in HBase, and if so, is your slave
consuming the replicated data?
-Nishanth
On Wed, Feb 25, 2015 at 10:19 AM, Madeleine Piffaretti <
mpiffare...@powerspace.com> wrote:
> Hi all,
>
> We are running out of space in our small hadoop cluster so I was checking
> dis
Hello,
I have a field which is indexed and stored in the Solr schema (Solr Cloud 4.4).
This field is relatively large, and I plan to only index the field
and not store it. Is there a need to re-index the documents once this
change is made?
Thanks,
Nishanth
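For reference, a change like the one described above is made in schema.xml; a before/after sketch, with a hypothetical field name and type, assuming a Solr 4.x schema:

```xml
<!-- Before: field is indexed and stored (field name and type are assumptions) -->
<field name="body" type="text_general" indexed="true" stored="true"/>

<!-- After: index only; newly indexed documents no longer carry stored content -->
<field name="body" type="text_general" indexed="true" stored="false"/>
```

Note that documents already in the index keep their stored values until they are re-indexed, so a full re-index is what actually makes the change take effect for existing documents.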
t's pretty high not as much 16 to 32.
> > I only use one yscb, could it be that important?
> >
> > -threads : the number of client threads. By default, the YCSB Client
> > uses a single worker thread, but additional threads can be specified.
> > This is often done
Please ignore.
On Fri, Jan 30, 2015 at 10:39 AM, Nishanth S
wrote:
> Hello,
>
> I have a field which is indexed and stored in the Solr schema (Solr Cloud 4.4).
> This field is relatively large, and I plan to only index the field
> and not store it. Is there a need to re-index
ERSIONS => '0', TTL
> >> => 'FOREVER', KEEP_DELETED_CELLS => '
> >> false', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE =>
> 'true'}
> >> 1 row(s) in 0.0170 seconds
> >>
> >>
You can use YCSB for this purpose. See here:
https://github.com/brianfrankcooper/YCSB/wiki/Getting-Started
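A minimal invocation sketch, assuming the YCSB HBase binding and a pre-created table with a column family named 'family' (names and counts follow the wiki above but are assumptions here; these are cluster-bound commands, shown only as a fragment):

```shell
# Load phase: insert records into the target table
bin/ycsb load hbase -P workloads/workloada -p columnfamily=family -p recordcount=1000000

# Run phase: execute the workload's read/update mix against the loaded data
bin/ycsb run hbase -P workloads/workloada -p columnfamily=family -p operationcount=1000000
```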
-Nishanth
On Wed, Jan 28, 2015 at 1:37 PM, Guillermo Ortiz
wrote:
> Hi,
>
> I'd like to do some benchmarks for HBase but I don't know what tool I
> could use. I started to write some code but I
Hi,
We were running an HBase cluster with replication enabled. However, we have
moved away from replication and turned it off. I also went ahead and
removed the peers from the HBase shell. However, the oldWALs directory is not
cleaned up. I am using HBase version 0.96.1. Is it safe enough to delete
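Before deleting anything, it may be worth confirming that no peers remain and checking how much space is actually involved; a command sketch, assuming the default 0.96 root directory layout:

```shell
# Confirm that no replication peers are left over
echo "list_peers" | bin/hbase shell

# Check how much space the archived WALs occupy (path is the default, an assumption)
hdfs dfs -du -s -h /hbase/oldWALs
```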
Hi All,
I am running a MapReduce job which scans the HBase table for a particular
time period and then creates some files from that. The job runs fine for 10
minutes or so, and around 10% of the maps complete successfully. Here is
the error that I am getting. Can someone help?
15/01/22 19:34:
> It doesn't support dealing with salted rowkeys (or reverse timestamps) out
> of the box, so you may have to munge the data a little bit after it's
> loaded to get what you want.
>
> Hope this helps.
> Pradeep
>
> On Fri Dec 05 2014 at 9:55:04 AM Nishanth S
&g
Hey folks,
I am trying to write a MapReduce job in Pig against my HBase table. I have
salting in my rowkey, appended with reverse timestamps, so I guess the best
way is to do a scan for all the dates that I require to pull out
records. Does anyone know if Pig supports HBase scans out of the box, or
en2...@gmail.com> wrote:
> Nishanth,
> What version of HBase you are using?
>
> You can try clear the ZNode about regionserver list in zookeeper
> /hbase/ and then restart HMaster.
>
> --
> yeweichen2...@gmail.com
>
>
> *From:*
ear the dead region
> servers is to restart the master daemon.
>
> -Pere
>
> On Mon, Nov 3, 2014 at 9:49 AM, Nishanth S
> wrote:
>
> > Hey folks,
> >
> > How do I remove a dead region server? I manually failed over the HBase
> > master but this is still
Hey folks,
How do I remove a dead region server? I manually failed over the HBase
master, but it is still appearing in the master UI and also in the status
command that I run.
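The replies above suggest restarting the master, since the dead-server list lives in the master's memory; a command sketch (paths and the znode location are defaults and may differ per install):

```shell
# Restart the HMaster so it rebuilds its view of live/dead region servers
bin/hbase-daemon.sh stop master
bin/hbase-daemon.sh start master

# If a stale entry persists, inspect what ZooKeeper still tracks
bin/hbase zkcli ls /hbase/rs
```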
Thanks,
Nishan
Can you telnet to ports 2181 and 60020 on the remote cluster, if you are
running default ports? I had a similar issue in the past where there was a
firewall.
Thanks,
Nishanth
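A quick connectivity sketch from the client machine (the hostname is a placeholder; the ports are the defaults mentioned above):

```shell
# ZooKeeper client port and region server port, respectively
telnet remote-cluster-host 2181
telnet remote-cluster-host 60020

# nc works as an alternative where telnet is not installed
nc -vz remote-cluster-host 2181
```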
On Sun, Oct 26, 2014 at 9:39 AM, Ted Yu wrote:
> Is hbase-site.xml corresponding to your cluster on the classpath of your
things are necessary for HBase to
> > > support a columnar format such as parquet or orc; no such investigation
> > has
> > > been undertaken that I am aware of.
> > >
> > > Thanks,
> > > Nick
> > >
> > > On Monday, October 20, 20
Hey folks,
I have been reading a bit about Parquet and how Hive and Impala work well
on data stored in Parquet format. Is it even possible to do the same with
HBase, to reduce storage etc.?
Thanks,
Nishanth
're various unit tests for Filters in hbase code.
>
> Cheers
>
> On Wed, Oct 15, 2014 at 2:30 PM, Nishanth S
> wrote:
>
> > Hi Ted ,
> > Since I am also working on a similar thing, is there a way we can first
> > test the filter on the client side? You kn
Hi Ted ,
Since I am also working on a similar thing, is there a way we can first test
the filter on the client side? You know what I mean: without disrupting others
who are using the same cluster for other work?
Thanks,
Nishanth
On Wed, Oct 15, 2014 at 3:17 PM, Ted Yu wrote:
> bq. Or create a new fi
oad tool moves files, not copies them. So once written you
> will not do any additional writes (except for those regions which were split
> while you were filtering data). If the imported data is small, that would not
> be a problem.
>
> On Wed, Oct 8, 2014 at 8:45 PM, Nishanth S
> wrote:
&g
stead. They can
> be configurable and loadable (but not
> unloadable, so you need to think about some class loading magic like
> ClassWorlds)
> For bulk imports you can create HFiles directly and add them incrementally:
> http://hbase.apache.org/book/arch.bulk.load.html
>
> On Wed
?
Thanks,
-Nishan
On Wed, Oct 8, 2014 at 9:50 AM, Nishanth S wrote:
> Hey folks,
>
> I am evaluating loading an HBase table from Parquet files, based on
> some rules that would be applied to the Parquet file records. Could someone
> help me on what would be the best
Hey folks,
I am evaluating loading an HBase table from Parquet files, based on
some rules that would be applied to the Parquet file records. Could someone
help me on what would be the best way to do this?
Thanks,
Nishan
at 10:49 AM, Ted Yu wrote:
> Can you give a bit more detail, such as:
>
> the release of HBase you're using
> number of column families where slowdown is observed
> size of cluster
> release of hadoop you're using
>
> Thanks
>
> On Mon, Sep 29, 2014 at 9:43
possible, or am I doing something wrong.
-Nishan
On Thu, Sep 25, 2014 at 11:56 AM, Ted Yu wrote:
> There should not be impact to hbase write performance for two column
> families.
>
> Cheers
>
> On Thu, Sep 25, 2014 at 10:53 AM, Nishanth S
> wrote:
>
> > Thank yo
Thank you Ted.
-Nishan
On Thu, Sep 25, 2014 at 11:56 AM, Ted Yu wrote:
> There should not be impact to hbase write performance for two column
> families.
>
> Cheers
>
> On Thu, Sep 25, 2014 at 10:53 AM, Nishanth S
> wrote:
>
> > Thank you, Ted. No, I do not plan
ery, you can designate the smaller column family as essential
> column family where smaller columns are queried.
>
> Cheers
>
> On Thu, Sep 25, 2014 at 9:57 AM, Nishanth S
> wrote:
>
> > Hi everyone,
> >
> > This question may have been asked many times but I would
Hi everyone,
This question may have been asked many times, but I would really appreciate
it if someone can help me on how to go about this.
Currently my HBase table consists of about 10 columns per row, which in
total have an average size of 5K. The chunk of the size is held by one
particular col
needs to be deleted.
-Nishan
On Wed, Sep 24, 2014 at 12:14 PM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> Hi Nishan,
>
> What you are looking for is HBASE-11764
> <https://issues.apache.org/jira/browse/HBASE-11764> and not available yet.
>
> JM
>
>
Hi All,
We were using the TTL feature to delete HBase data, since we were able to
define the retention days at the column-family level. But right now we have a
requirement to store data with different retention periods in this
column family, so we would need to do a select and delete. What would be
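For comparison, the CF-level TTL used so far is set like this in the HBase shell (table and family names are hypothetical); anything finer-grained than a per-family TTL has to be handled by an application-level scan-and-delete:

```shell
# Run inside the HBase shell. TTL is in seconds and applies to every cell in the family.
alter 'logs', {NAME => 'cf1', TTL => 604800}   # 7 days
```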
Hi folks,
We have an HBase table with 4 column families which stores log data. The
columns and the content stored in each of these column families are the
same. The reason for having multiple families is that we needed 4 retention
buckets for messages and were using the TTL feature of HBase to
; in your custom Filter.
>
> JFYI
>
> -Anoop-
>
> On Fri, Sep 12, 2014 at 4:36 AM, Nishanth S
> wrote:
>
> > Sure, Sean. This is much needed.
> >
> > -Nishan
> >
> > On Thu, Sep 11, 2014 at 3:57 PM, Sean Busbey
> wrote:
> >
> > &
t;
> [1]: https://issues.apache.org/jira/browse/HBASE-11950
>
> On Thu, Sep 11, 2014 at 4:40 PM, Ted Yu wrote:
>
> > See http://search-hadoop.com/m/DHED4xWh622
> >
> > On Thu, Sep 11, 2014 at 2:37 PM, Nishanth S
> > wrote:
> >
> > > Hey All,
> > >
Hey All,
I am sorry if this is a naive question. Do we need to generate a proto file
using the protocol buffer compiler when implementing a filter? I did not see
that anywhere in the documentation. Can someone help, please?
On Thu, Sep 11, 2014 at 12:41 PM, Nishanth S
wrote:
> Thanks Dima and Ted.
understand your use case correctly, you might want to look at
> > RegexStringComparator to match the first 1000 characters of your column
> > qualifier.
> >
> > -Dima
> >
> > On Thu, Sep 11, 2014 at 12:37 PM, Nishanth S
> > wrote:
> >
> > >
Hi All,
I have an HBase table with multiple CFs (say c1, c2, c3). Each of these column
families has a column 'message' which is about 5K. What I need to do is to
grab only the first 1000 characters of this message when I do a get on the
table using the row key. I was thinking of using filters to do this on
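Until a server-side filter is in place, one stopgap is plain client-side truncation after the get: the full ~5K value still crosses the wire, but the application keeps only the first 1000 characters. A self-contained sketch of the idea in shell terms (the 5K value here is a stand-in for a fetched 'message' cell):

```shell
# Build a stand-in for a 5K 'message' cell value
value=$(printf 'x%.0s' $(seq 1 5000))

# Truncate to the first 1000 characters client-side
truncated=$(printf '%s' "$value" | cut -c1-1000)
echo "${#truncated}"   # prints 1000
```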
Also take a look at HBASE-5416 which introduced essential column family.
>
> Cheers
>
>
> On Fri, Aug 22, 2014 at 8:41 AM, Nishanth S
> wrote:
>
> > Hi everyone,
> >
> > We have an hbase implementation where we have a single table which
> stores
> &g
Hi everyone,
We have an HBase implementation where we have a single table which stores
different types of log messages. We have a requirement to notify (send an
email to a mailing list) when we receive a particular type of message. I will
be able to identify this type of message by looking a