OK, grepping the RS logs I see nothing with 'local' in any of them. Thanks
for that hint.
For the test I was using, I know it is data-local: every map task launched
data-local, and no regions were moving recently.
I think I've hijacked this thread enough, I'll move my issues to another.
;-)
On
Hi Robert,
When HDFS is doing the local short-circuit read, it will use the
BlockReaderLocal class for reading. There should be some logs at the DFS
client side (RS) which tell about creating a new BlockReaderLocal. If you can
see this, then the local read is definitely happening.
Also check DN l
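For reference, a minimal sketch of the DFS-client-side settings usually
involved in short-circuit reads on the Hadoop 1.x era stack; the class and
method names are placeholders, and the two configuration keys are assumed to
be the relevant ones for this setup:

    import org.apache.hadoop.conf.Configuration;

    public class ShortCircuitConfSketch {
      // Returns a client configuration with short-circuit local reads enabled.
      public static Configuration withShortCircuitReads(String hbaseUser) {
        Configuration conf = new Configuration();
        // Let the DFS client (the region server) read local block files directly.
        conf.setBoolean("dfs.client.read.shortcircuit", true);
        // The DataNode must whitelist the user allowed to access block files.
        conf.set("dfs.block.local-path-access.user", hbaseUser);
        return conf;
      }
    }

With these in place, the RS-side DFS client logs should mention a
BlockReaderLocal being created, as described above.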
Not trying to hijack your thread here...
But can you verify via logs that the short-circuit is working? Because I
enabled short-circuit but I sure didn't see any performance increase.
I haven't tried enabling HBase checksum yet, but I'd like to be able to
verify that works too.
On Thu, Jan 31, 20
You'll find Jackson, which includes Jackson's JSON processor.
On Thursday, January 31, 2013, Wei Tan wrote:
> We need to parse JSON in a coprocessor, and if the HBase /lib directory
> contains any JSON processing util, we can avoid introducing additional
> jars.
> Thanks!
>
>
> Best Regards,
> Wei
>
You can check with HDFS-level logs whether the checksum meta file is getting
read by the DFS client. With the HBase-handled checksum, this should not happen.
Have you noticed any perf gain when you configure the HBase-handled checksum
option?
-Anoop-
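For reference, a minimal sketch of turning on the HBase-handled checksum
option discussed above; the class and method names are placeholders, and the
configuration key is assumed to be the 0.94-era one:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class HBaseChecksumConfSketch {
      // Returns a configuration where the region server verifies checksums itself.
      public static Configuration withHBaseChecksums() {
        Configuration conf = HBaseConfiguration.create();
        // With this on, the separate HDFS checksum meta file should not be read.
        conf.setBoolean("hbase.regionserver.checksum.verify", true);
        return conf;
      }
    }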
From:
We have this dependency:
    <dependency>
      <groupId>com.sun.jersey</groupId>
      <artifactId>jersey-json</artifactId>
      <version>${jersey.version}</version>
    </dependency>
See:
http://jersey.java.net/nonav/documentation/latest/json.html
On Thu, Jan 31, 2013 at 5:03 PM, Wei Tan wrote:
> We need to parse JSON in a coprocessor and if HBase /lib directory
> cont
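For the original question below, a minimal sketch of parsing JSON inside a
coprocessor with the Jackson 1.x classes (org.codehaus.jackson) typically
present in the HBase /lib directory of this era; the class name and field
handling are illustrative only:

    import org.codehaus.jackson.JsonNode;
    import org.codehaus.jackson.map.ObjectMapper;

    public class JsonParseSketch {
      private static final ObjectMapper MAPPER = new ObjectMapper();

      // Extracts a single text field from a JSON document, or null if absent.
      public static String extractField(String json, String field) throws Exception {
        JsonNode root = MAPPER.readTree(json);
        JsonNode value = root.get(field);
        return value == null ? null : value.getTextValue();
      }
    }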
We need to parse JSON in a coprocessor, and if the HBase /lib directory
contains any JSON processing util, we can avoid introducing additional
jars.
Thanks!
Best Regards,
Wei
Thanks. I may try this approach later.
For now I am using a remote cluster to test, and I have this workaround:
1. mvn install with tests skipped
2. copy the CP to the remote cluster
3. mvn install with tests
I doubt it is a good approach, but it works.
Best Regards,
Wei
From: Adrien Mogenet
To: user@
Hi,
I have activated short-circuit and checksum and I would like to get
confirmation that they are working fine.
I activated short-circuit first and saw a 40% improvement in
the MR rowcount job, so I guess it's working fine.
Now I'm configuring the checksum option, and I'm wondering how I
Yes, of course; the CP is instantiated (and its start() method is then called)
when the region is starting. Disabling/enabling the table will force all
regions to be re-opened.
On Thu, Jan 31, 2013 at 8:10 PM, Mesika, Asaf wrote:
> Hi,
>
> If I deploy a Region Observer to a table using HDFS Jar
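For reference, a minimal sketch of the disable/modify/enable cycle described
above, using the 0.92/0.94-era client API; the table name, jar path and
observer class are placeholders, and it assumes the table does not already
carry a coprocessor entry for the same class (otherwise the old entry has to
be dropped first):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.Coprocessor;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RedeployObserverSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        byte[] table = Bytes.toBytes("mytable");

        HTableDescriptor htd = admin.getTableDescriptor(table);
        // Point the table attribute at the new jar in HDFS.
        htd.addCoprocessor("com.example.MyRegionObserver",
            new Path("hdfs:///cp/myobserver-2.0.jar"),
            Coprocessor.PRIORITY_USER, null);

        admin.disableTable(table);
        admin.modifyTable(table, htd);
        admin.enableTable(table);   // re-opens all regions with the new jar
      }
    }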
Hi,
If I deploy a Region Observer to a table using an HDFS jar, is it possible
to deploy a new version and disable/enable the table to get the new jar
hot-deployed, without restarting all the region servers?
Asaf Mesika
Senior Developer / CSI Infrastructure Team
Office: +972 (73) 285-8769
Fax: +
On Wed, Jan 30, 2013 at 7:55 AM, kzurek wrote:
> Hi,
>
> I'm having the following issues with manually triggering major compaction on
> selected regions via HBaseAdmin:
> 1. When I trigger major compaction on the first region, which does not
> contain the key, it runs normally - I see a message in
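For reference, a minimal sketch of triggering major compaction per region
through HBaseAdmin, as in the report above; the table name is a placeholder
and the 0.90-era client API is assumed:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionInfo;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.client.HTable;

    public class CompactSelectedRegionsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTable table = new HTable(conf, "mytable");
        // Request a major compaction for each region by its full region name.
        for (HRegionInfo region : table.getRegionsInfo().keySet()) {
          admin.majorCompact(region.getRegionNameAsString());
        }
        table.close();
      }
    }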
If you're writing a junit test that spins up a mini cluster to test the
coprocessor, then there's no need to deploy the jar into HDFS just for
testing. The coprocessor class should already be on your test classpath.
In your test's setup method, you just need to either: a) add the
coprocessor clas
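For reference, a minimal sketch of that kind of test setup; the table name and
observer class are placeholders, and the coprocessor is attached via the table
descriptor so no jar has to go to HDFS:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;

    public class MyObserverTest {
      private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUp() throws Exception {
        UTIL.startMiniCluster();
        HTableDescriptor htd = new HTableDescriptor("testtable");
        htd.addFamily(new HColumnDescriptor(Bytes.toBytes("f")));
        // The observer class is loaded straight from the test classpath.
        htd.addCoprocessor("com.example.MyRegionObserver");
        UTIL.getHBaseAdmin().createTable(htd);
      }

      @AfterClass
      public static void tearDown() throws Exception {
        UTIL.shutdownMiniCluster();
      }
    }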
Yes, you are correct: event3 never emits for the time "10:07".
The proper result table is, as you mention:
event1 | event2
event2 | event3
event3 |
(With T=5, event3 at 10:12 is a full 5 minutes after event1 at 10:07, so it
falls outside event1's window; with the old T=7 it would have been included.)
I guess I was thinking about the old example (T=7). :)
On Thu, Jan 31, 2013 at 12:39 PM, Oleg Ruchovets wrote:
> Hi Rodrigo
Hi Rodrigo,
That is just a GREAT idea :-) !!!
But how did you get the final result:
event1 | event2, event3
event2 | event3
event3 |
I tried to simulate it and didn't get event1 | event2, event3
(10:03, [*after*, event1])
(10:04, [*after*, event1])
(10:05, [*after*
Hi,
The Map and Reduce steps that you mention are the same as what I thought.
> How should I work with this table? Should I scan the main table row by
> row, and for every row get the event time and based on that time query the
> second table?
>
> In case I do so, I still need to execute 50 milli
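For reference, a minimal sketch of the scan-then-get pattern described in the
question above; the table, family and qualifier names are placeholders:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ScanThenGetSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable main = new HTable(conf, "main_events");
        HTable byTime = new HTable(conf, "events_by_time");
        ResultScanner scanner = main.getScanner(new Scan());
        for (Result row : scanner) {
          byte[] time = row.getValue(Bytes.toBytes("f"), Bytes.toBytes("time"));
          // One extra round trip per scanned row against the second table.
          Result related = byTime.get(new Get(time));
          // ... correlate 'row' with 'related' here ...
        }
        scanner.close();
        main.close();
        byTime.close();
      }
    }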
I am going to disagree with ignoring the error. You will encounter
failures when doing other operations such as imports/exports. The first
thing I would do, like JM said, is focus on the region that is not in
META (we at least want 0 inconsistencies). Can you please run hbck -repair
and then r
Hi Rodrigo,
As usual you have a very interesting idea! :-)
I am not sure that I understand exactly what you mean, so I tried to
simulate it.
Suppose we have these events in the MAIN table:
event1 | 10:07
event2 | 10:10
event3 | 10:12
Time window T=5 minutes.
===
Hi Brandon,
I faced the same issue with "HRegionInfo was null or empty" on January
24th, and Ted replied:
"Encountered problems when prefetch META table:
You can ignore the warning."
So I think you should focus on the last one, "not listed in META or
deployed on any region server".
Have you tried
hadoop 0.20.2-cdh3u2
hbase 0.90.4-cdh3u2
On January 8th I had a network event where I lost three region servers.
When they came back, I had unassigned-region / region-not-being-served errors,
which I fixed with hbck -fix.
Since then, however, I have been getting an increasing number of these