This was a bug in MapR. I got a reply on the MapR forum. If someone is facing
a similar issue, refer to the link below.
http://answers.mapr.com/questions/163440/hbase-bulk-load-map-reduce-job-failing-on-mapr.html
On Wed, Jun 3, 2015 at 8:02 PM, Shashi Vishwakarma wrote:
> Hi
>
> Yes I am using MapR FS. I
Hi hbase users,
[Reposting this issue]
hbase-client 0.94.x works fine in a Karaf environment. We are working on a
task to move to the latest stable hbase-client 1.0.x. Since hbase-client
requires hbase-common, both are referenced in our pom.xml, and classes from
package hbase-common.jar's-
Hi Vladimir Rodionov,
Thanks for the reply.
The problem we encounter is that we found some
SCAN operations (with a startkey, an endkey, and a filter) lasting for more
than 1 hour, which leads to heavy network
traffic, because some data is not stored at the local data node and the
region is very big
Louis,
What do you mean by "monitor the long scan"? If you need to throttle
network IO during a scan, you have to
do it on the client side. Take a look at
org.apache.hadoop.hbase.io.hadoopbackport.ThrottledInputStream
as an example; you will need to implement something similar on top of
ResultScanner.
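Very roughly, such a wrapper could look like this (untested sketch; the
sleep-based pacing is my own idea, not an HBase API, and
Result.getTotalSizeOfCells only exists in newer clients, so on 0.96 you
would have to estimate the row size yourself):

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;

    // Sketch: wrap a ResultScanner and sleep between rows to cap the read rate.
    public class ThrottledResultScanner {
      private final ResultScanner delegate;
      private final long maxBytesPerSec;
      private long bytesRead = 0;
      private final long startMs = System.currentTimeMillis();

      public ThrottledResultScanner(ResultScanner delegate, long maxBytesPerSec) {
        this.delegate = delegate;
        this.maxBytesPerSec = maxBytesPerSec;
      }

      public Result next() throws IOException, InterruptedException {
        Result r = delegate.next();
        if (r != null) {
          bytesRead += Result.getTotalSizeOfCells(r);  // rough size estimate
          long expectedMs = bytesRead * 1000 / maxBytesPerSec;
          long elapsedMs = System.currentTimeMillis() - startMs;
          if (expectedMs > elapsedMs) {
            Thread.sleep(expectedMs - elapsedMs);  // fall back under the cap
          }
        }
        return r;
      }
    }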
Hi Dave,
For now we will not upgrade the version, so is there something we can use to
monitor the long scans on 0.96?
2015-06-11 2:00 GMT+08:00 Dave Latham :
> I'm not aware of anything in version 0.96 that will limit the scan for
> you - you may have to do it in your client yourself. If you're
>
Can you provide the full code for Conver() and Listclass?
Giving snippets of code is insufficient.
My suspicion is a bug in your code.
You might want to print out the output of Conver(next) before passing it to
Listclass.add(), and print out the entire list of Listclass elements during
each iteration.
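For example, inside your while loop (Conver and Listclass are the names from
your code; the rest is just illustrative):

    Result next = iterator.next();
    Object converted = Conver(next);  // whatever type Conver() actually returns
    System.out.println("Conver(next) = " + converted);
    Listclass.add(converted);
    System.out.println("Listclass after add = " + Listclass);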
threads?
So regardless of your Hadoop settings, if you want something faster, you
can run a timer in one thread and the request in another. If you
hit your timeout before you get a response, you can stop the request thread.
(YMMV depending on side effects… )
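A rough sketch of that pattern with a Future (the table/rowKey request is a
placeholder and both are assumed to be final locals):

    import java.util.concurrent.*;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;

    // Run the request in a worker thread; give up after a deadline.
    ExecutorService pool = Executors.newSingleThreadExecutor();
    Future<Result> future = pool.submit(new Callable<Result>() {
      public Result call() throws Exception {
        return table.get(new Get(rowKey));  // placeholder request
      }
    });
    try {
      Result r = future.get(5, TimeUnit.SECONDS);  // wait at most 5 seconds
    } catch (TimeoutException e) {
      future.cancel(true);  // interrupt the request thread
    }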
> On Jun 10, 2015, at 1
When in doubt, printf() can be your friend.
Yeah, it's primitive (old school) but effective.
Then you will know what you’re adding to your list for sure.
> On Jun 10, 2015, at 12:39 PM, beeshma r wrote:
>
> HI Devaraj
>
> Thanks for your suggestion.
>
> Yes i coded like this as per your sugge
Greetings HBase users and developers,
On the Apache HBase blog at https://blogs.apache.org/hbase we have just
published the first in a series of posts on "Why We Use Apache HBase", in
which we let HBase users and developers borrow our blog so they can
showcase their successful HBase use cases, tal
I'm not aware of anything in version 0.96 that will limit the scan for
you - you may have to do it in your client yourself. If you're
willing to upgrade, do check out the throttling available in HBase
1.1:
https://blogs.apache.org/hbase/entry/the_hbase_request_throttling_feature
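For reference, in 1.1 a per-user throttle can be set from Java roughly like
this (sketch; the user name and limit are made up, and quota support has to
be enabled on the cluster via hbase.quota.enabled):

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.quotas.QuotaSettings;
    import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
    import org.apache.hadoop.hbase.quotas.ThrottleType;

    // Cap user "scanuser" at 10 MB of request traffic per second.
    QuotaSettings quota = QuotaSettingsFactory.throttleUser(
        "scanuser", ThrottleType.REQUEST_SIZE, 10L * 1024 * 1024, TimeUnit.SECONDS);
    admin.setQuota(quota);  // admin is an org.apache.hadoop.hbase.client.Admin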
On Wed, Jun 10,
You can utilize the following method of Scan:
public Scan setTimeRange(long minStamp, long maxStamp)
To apply Filter, use this:
public Scan setFilter(Filter filter)
FYI
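Putting the two together, something like this (sketch; the filter, prefix,
and timestamp variables are made up):

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.PrefixFilter;
    import org.apache.hadoop.hbase.util.Bytes;

    Scan scan = new Scan();
    scan.setTimeRange(minStamp, maxStamp);  // only versions in [minStamp, maxStamp)
    scan.setFilter(new PrefixFilter(Bytes.toBytes("row-prefix")));  // any other filter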
On Wed, Jun 10, 2015 at 10:33 AM, Devpriya Dave <
devpr...@yahoo-inc.com.invalid> wrote:
> Hello
> I am an Intern at Y
Hello
I am an intern at Yahoo-Flickr. During one of my projects I wanted to scan an
HBase table first based on a timestamp range and then apply some other filter.
However, a timestamp "range" filter is not available. Is there any way this
can be done, i.e. apply a timestamp "range" filter and other filters?
Hi all,
We are using HBase 0.96 with Hadoop 2.2.0. Recently we found that some
SCAN operations last for more than 1 hour, which leads to heavy network
traffic, because some data is not
stored at the local data node and the region is very big, about 100G-500G.
With heavy network traffic, the regi
any ideas?
2015-06-10 16:01 GMT+08:00 娄帅 :
> Hi all,
>
> We are using HBase 0.96 with Hadoop 2.2.0. Recently we found that some
> SCAN operations last for more than 1 hour, which leads to heavy network
> traffic, because some data is not
> stored at the local data node and the region is very big
Hi Devaraj,
Thanks for your suggestion.
Yes, I coded it like this as per your suggestion:
public static void put_result(ResultScanner input) throws IOException
{
    // Walk every Result the scanner returns
    Iterator<Result> iterator = input.iterator();
    while (iterator.hasNext())
    {
        Result next = iterator.next();
        Listclass.add(Conver(next));
    }
}
On Mon, Jun 8, 2015 at 10:27 PM, anil gupta wrote:
> So, if we have to match against non-string data in the hbase shell, should
> we always use double quotes?
Double quotes mean the shell (Ruby) will interpret and undo any escaping
-- e.g. showing as hex -- of binary characters. What we emit on t
Yes. Let's say, from the hbase shell, I would like to filter
(SingleColumnValueFilter) rows on the basis of a cell value that is stored
as an int.
Let's assume the column name and value to be USER:AGE=5.
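The Java equivalent would be roughly the following; in the shell the int 5
then has to be written as its escaped binary form, e.g. "\x00\x00\x00\x05",
inside double quotes (sketch):

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.BinaryComparator;
    import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
    import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
    import org.apache.hadoop.hbase.util.Bytes;

    // Match rows where USER:AGE equals the 4-byte big-endian int 5.
    Scan scan = new Scan();
    scan.setFilter(new SingleColumnValueFilter(
        Bytes.toBytes("USER"), Bytes.toBytes("AGE"),
        CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes(5))));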
On Tue, Jun 9, 2015 at 9:26 PM, Ted Yu wrote:
> bq. if we have to match against non-string data in hbase
Hi Talat,
That should work.
Another example would be something like below.
test = LOAD '$TEST'
USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('cf_data:name
cf_data:age', '-loadKey true -maxTimestamp $test_date')
as (rowkey, name, age);
On Wed, Jun 10, 2015 at 12:57 PM, Talat Uyarer wrote:
>
Hi Ted Yu,
I guess Krishna meant Pig's HBaseStorage class; I found this out by
searching for the class on Google. IMHO I have found a solution for my
problem: I can use the Scan.setTimeRange [0] method. If I want to get
records smaller than a timestamp, minTimestamp is set to 0 and maxTimestamp
is set to the tim