Is it possible to upload one or is there a guide on how to upload ?
Thanks
Varun
On Mon, Jul 27, 2015 at 2:24 PM, Ted Yu wrote:
> To my knowledge, there is no published maven artifact for 0.94 built
> against hadoop 2.X
>
> Cheers
>
> On Mon, Jul 27, 2015 at 2:21 PM, Varun
Hi,
Is there a Maven package for HBase 0.94 built against Hadoop 2.X ? I think
the default one available is built against Hadoop 1.X ?
Thanks
Varun
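A sketch of the usual workaround, consistent with the hadoop.profile=2.0
build mentioned later in this archive: build the 0.94 artifact from source
yourself and install it into your local Maven repository.

  # Build HBase 0.94 against Hadoop 2 and install it locally;
  # no such artifact is published upstream.
  mvn clean install -DskipTests -Dhadoop.profile=2.0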
Sorry - wrong user mailing list - please ignore...
On Thu, Jul 31, 2014 at 12:12 PM, Varun Sharma wrote:
> Hi,
>
> I am trying to write a customized rebalancing algorithm. I would like to
> run the rebalancer every 30 minutes inside a single thread. I would also
> like to com
Hi,
I am trying to write a customized rebalancing algorithm. I would like to
run the rebalancer every 30 minutes inside a single thread. I would also
like to completely disable Helix triggering the rebalancer.
I have a few questions:
1) What's the best way to run the custom controller ? Can I sim
Hi folks,
I am wondering why we have a tiered index in the HFile format. Is it
because the root index must fit in memory - and hence must be limited in
size ? Does the bound on the root index pretty much dictate the index tiers ?
Thank
Varun
Seems like it's called DATA_BLOCK_ENCODING, so it should only apply to data
blocks ?
On Fri, May 30, 2014 at 11:36 AM, Varun Sharma wrote:
> Hi,
>
> I have a question about prefix encodings. When we specify encoding to be
> FAST_DIFF, PREFIX, are the index/bloom filter blocks also
Hi,
I have a question about prefix encodings. When we specify encoding to be
FAST_DIFF, PREFIX, are the index/bloom filter blocks also encoded with the
same encoding ? Also, are these blocks, such as index/bloom blocks, kept in
the encoded form inside the block cache ?
Thanks
Varun
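A sketch of where the setting lives (0.94-era API; table and family names
are illustrative). DATA_BLOCK_ENCODING is declared per column family, which
matches the reading above that it targets data blocks rather than
index/bloom blocks:

  import org.apache.hadoop.hbase.HColumnDescriptor;
  import org.apache.hadoop.hbase.HTableDescriptor;
  import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;

  public class EncodingSketch {
    public static HTableDescriptor describe() {
      // FAST_DIFF and PREFIX are the encodings discussed in this thread.
      HColumnDescriptor family = new HColumnDescriptor("d");
      family.setDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
      HTableDescriptor table = new HTableDescriptor("mytable");
      table.addFamily(family);
      return table;
    }
  }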
Seems like those JIRAs are 1.0 - I did not see a 0.94 version # there ?
On Wed, Apr 2, 2014 at 1:40 PM, Ted Yu wrote:
> Tianying:
> Have you seen the design doc attached to HBASE-7912 'HBase Backup/Restore
> Based on HBase Snapshot' ?
>
> Cheers
>
> >
> > > On Tue, Mar 25, 2014 at 2:38 PM, Tianyi
4, 2014 at 10:07 AM, Ted Yu wrote:
> Cycling old bits:
>
>
> http://search-hadoop.com/m/DHED4v7stT1/larger+HFile+block+size+for+very+wide+row&subj=larger+HFile+block+size+for+very+wide+row+
>
>
> On Mon, Feb 24, 2014 at 11:51 AM, Varun Sharma
> wrote:
>
> > H
Hi,
What happens if my block size is 32K while the cells are 50K ? Do HFile
blocks round up to 50K, or are values split across blocks ? Also how does
this play with the block cache ?
Thanks
Varun
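For what it's worth, a sketch of the relevant knob (0.94-era API; names
illustrative). The block size is a soft target - a block is closed at the
next cell boundary after it fills - so cells are not split across blocks,
and a 50K cell simply produces an oversized block that is then cached whole:

  import org.apache.hadoop.hbase.HColumnDescriptor;

  public class BlockSizeSketch {
    public static HColumnDescriptor family() {
      HColumnDescriptor family = new HColumnDescriptor("d");
      // 64K is the default; the question above used 32K. With ~50K cells,
      // each block ends up holding one cell regardless of this value.
      family.setBlocksize(64 * 1024);
      return family;
    }
  }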
Actually there are 2 read-aheads in Linux (from what I learned last time I
did benchmarking on random reads). One is the filesystem readahead, which
Linux does, and then there is also a disk-level read-ahead which can be
modified using the hdparm command. IIRC, there is no sure way of
removing fi
filter out columns which have a
timestamp < 2.
Varun
On Tue, Jan 28, 2014 at 9:04 AM, Varun Sharma wrote:
> Lexicographically, (ROW, COL2, T=3) should come after (ROW, COL1, T=1)
> because COL2 > COL1 lexicographically. However in the above example, it
> comes before the de
e, in terms of sort order, 'above' means before. 'below it' would mean
> after. So 'smaller' would mean before.
>
> Cheers
>
>
> On Tue, Jan 28, 2014 at 8:47 AM, Varun Sharma wrote:
>
> > Hi Ted,
> >
> > Not satisfied with your answe
:
> Varun:
> Take a look at http://hbase.apache.org/book.html#dm.sort
>
> There's no contradiction.
>
> Cheers
>
> On Jan 27, 2014, at 11:40 PM, Varun Sharma wrote:
>
> > Actually, I now have another question because of the way our work load is
> > struc
, you can grab and test them.
>
> -Vladimir
>
>
> On Mon, Jan 27, 2014 at 9:36 PM, Varun Sharma wrote:
>
> > Hi lars,
> >
> > Thanks for the background. It seems that for our case, we will have to
> > consider some solution like the Facebook one, since
ould be those of category #1 (and maybe #2) above.
>
> See: HBASE-9769, HBASE-9778, HBASE-9000 (and maybe HBASE-9915)
>
> I'll look at the trace a bit closer.
> So far my scan profiling has been focused on data in the blockcache since
> in the normal case the v
But we continue to see reads on .META. - no idea why ?
On Mon, Jan 27, 2014 at 8:52 PM, Varun Sharma wrote:
> We are not seeing any balancer related logs btw anymore...
>
>
> On Mon, Jan 27, 2014 at 8:23 PM, Ted Yu wrote:
>
>> Looking at the changes since release 0.94.7, I fo
n == null)? "null": "{" + metaLocation +
> "}")
> > +
> > ", attempt=" + tries + " of " +
> > this.numRetries + " failed; retrying after sleep of " +
> >
> >
> > On Mon, Jan 27, 2014 at
Actually it's not sometimes - we are always seeing a large # of .META. reads
every 5 minutes.
On Mon, Jan 27, 2014 at 7:47 PM, Varun Sharma wrote:
> The default one with 0.94.7... - I don't see any of those logs. Also we
> turned off the balancer switch - but looks like sometimes we still
ame for 0.94 and 0.96):
>
> for (RegionPlan plan: plans) {
> LOG.info("balance " + plan);
>
> Do you see such log in master log ?
>
>
> On Mon, Jan 27, 2014 at 7:26 PM, Varun Sharma wrote:
>
> > We are seeing one other issue with high
We are seeing one other issue with high read latency (p99 etc.) on one of
our read heavy hbase clusters which is correlated with the balancer runs -
every 5 minutes.
If there is no balancing to do, does the balancer only scan the table every
5 minutes - does it do anything on top of that if the re
.java:4777)
org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4750)
org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2152)
org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3700)
On Sun, Jan 26, 2014 at 1:14 PM, Varun Sharma wrote:
> Hi,
>
> We are seeing
Hi,
We are unfortunately seeing low performance in the memstore - we have
researched some of the previous JIRA(s) and seen some inefficiencies in the
ConcurrentSkipListMap. The symptom is a RegionServer hitting 100% CPU at
weird points in time - the bug is hard to reproduce and there isn't l
Hi everyone,
I have a question about the hbase thrift server and running scans in
particular. The thrift server maintains a map of integer scanner IDs ->
ResultScanner(s). These integers are passed back to the client. Now in a
typical setting people run many thrift servers and round-robin rpc(s) to them.
It seems t
e a threadlocal variable anywhere at least in the 0.94 code base ?
Also, one more thing - I see a client operation timeout in HTable being
used as well - is that different from the rpc timeout ?
Thanks !
Varun
On Mon, Nov 11, 2013 at 12:26 PM, Stack wrote:
> On Mon, Nov 11, 2013 at 12:11 PM, Varun
Hi,
Can the HBase rpc timeout be changed across different HBase rpc calls for
HBase 0.94 ? From the code, it looks like this is not possible. I am
wondering if there is a way to fix this ?
Thanks
Varun
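A sketch of the usual 0.94 workaround (the key names are the standard
configuration keys; everything else is illustrative): the timeout is read
from the Configuration when the connection is created, so it can be varied
per HTable instance rather than per call:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;

  public class TimeoutSketch {
    public static HTable tableWithTimeout(int rpcTimeoutMs) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      // Fixed at connection setup, hence per-Configuration, not per-call.
      conf.setInt("hbase.rpc.timeout", rpcTimeoutMs);
      // The separate client operation timeout mentioned above bounds the
      // whole operation across retries.
      conf.setInt("hbase.client.operation.timeout", rpcTimeoutMs * 3);
      return new HTable(conf, "mytable");
    }
  }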
How many reads per second per region server are you throwing at the system
- also is 100ms the average latency ?
On Mon, Oct 7, 2013 at 2:04 PM, lars hofhansl wrote:
> He still should not see 100ms latency. 20ms, sure. 100ms seems large;
> there are still 8 machines serving the requests.
>
> I
implied by the term salt.)
>
> What you really want to do is take the hash of the key, and then truncate
> the hash. Use that instead of a salt.
>
> Much better than a salt.
>
> Sent from a remote device. Please excuse any typos...
>
> Mike Segel
>
> > On Sep 24
f WORDs at
> that
> > POSITION. If I salt the keys with a hash, the WORDs will no longer be
> > sorted, and so I would need to do a full table scan for every lookup.
> >
> > Jean-Marc : What problems do you see with my solution? Do you have a
> > better suggestion
It's better to do some "salting" in your keys for the reduce phase.
Basically, make your key be something like "KeyHash + Key" and then decode it
in your reducer and write to HBase. This way you avoid the hotspotting
problem on HBase due to MapReduce sorting.
On Tue, Sep 24, 2013 at 2:50 PM, Jean-Ma
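A minimal sketch of the scheme described above (Java; the hash width and
helper names are illustrative):

  import java.security.MessageDigest;

  public class SaltSketch {
    // Prefix each key with a short, stable hash so the reducer's sorted
    // output no longer walks HBase regions in key order.
    public static String salt(String key) throws Exception {
      MessageDigest md5 = MessageDigest.getInstance("MD5");
      byte[] digest = md5.digest(key.getBytes("UTF-8"));
      // Two hex chars = 256 buckets: enough to spread the load, and
      // fixed-width so it is trivial to strip.
      return String.format("%02x", digest[0] & 0xff) + key;
    }

    // The reducer decodes by dropping the fixed-width prefix before
    // writing to HBase.
    public static String unsalt(String salted) {
      return salted.substring(2);
    }
  }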
We, at Pinterest, are also going to stay on 0.94 for a while since it has
worked well for us and we don't have the resources to test 0.96 in the EC2
environment. That may change in the future but we don't know when...
On Wed, Sep 4, 2013 at 1:53 PM, Andrew Purtell wrote:
> If LarsH is willing t
On Tue, Jul 30, 2013 at 11:27 AM, Varun Sharma
> wrote:
>
> > JD, it's a big problem. The region server holding .META. has 2X the network
> > traffic and 2X the cpu load, I can easily spot the region server holding
> > .META. by just looking at the ganglia graphs of the regi
pre-fetching, set
> > hbase.client.prefetch.limit to 0
> >
> > Also, is it even causing a problem or you're just worried it might
> > since it doesn't look "normal"?
> >
> > J-D
> >
> > On Mon, Jul 29, 2013 at 10:32 AM, Varun Sha
Hi folks,
We are seeing an issue with hbase 0.94.3 on CDH 4.2.0 with excessive .META.
reads...
In the steady state where there are no client crashes and there are no
region server crashes/region movement, the server holding .META. is serving
an incredibly large # of read requests on the .META. ta
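The knob from the reply quoted above, sketched (the key name appears in that
reply; the wrapper is illustrative):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class PrefetchSketch {
    public static Configuration conf() {
      Configuration conf = HBaseConfiguration.create();
      // Stop the client from pre-fetching region locations from .META.
      // whenever a table is opened.
      conf.setInt("hbase.client.prefetch.limit", 0);
      return conf;
    }
  }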
We have both c1.xlarge and hi1.4xlarge clusters at Pinterest. We have used
the following guidelines:
1) hi1.4xlarge - small data sets, random read heavy and IOPs bound - very
expensive per GB but very cheap per IOP
2) c1.xlarge/m1.xlarge - larger data sets, medium to low read load - cheap
per GB b
Hi Suraj,
One thing I have observed is that if you have very high block cache churn,
which happens in a read-heavy workload, a full GC eventually happens because
more block cache blocks bleed into the old generation (LRU-based caching).
I have seen this happen particularly when the read load is extrem
FYI, if you disable your block cache - you will ask for "Index" blocks on
every single request. So such a high rate of requests is plausible for Index
blocks even when your requests are totally random over your data.
Varun
On Mon, Jul 8, 2013 at 4:45 PM, Viral Bajaria wrote:
> Good question. When I
I mean version tracking with delete markers...
On Mon, Jul 1, 2013 at 8:17 AM, Varun Sharma wrote:
> So, yesterday, I implemented this change via a coprocessor which basically
> initiates a scan which is raw, keeps track of the # of delete markers
> encountered and stops when a c
> method.
>
>
> -- Lars
>
>
> - Original Message -
> From: Varun Sharma
> To: "d...@hbase.apache.org" ; user@hbase.apache.org
> Cc:
> Sent: Sunday, June 30, 2013 1:56 PM
> Subject: Re: Issues with delete markers
>
> Sorry, typo, i meant that for use
Sorry, typo, I meant that for user scans, should we be passing delete
markers through the filters as well ?
Varun
On Sun, Jun 30, 2013 at 1:03 PM, Varun Sharma wrote:
> For user scans, I feel we should be passing delete markers through as well.
>
>
> On Sun, Jun 30, 2013 at 12:
For user scans, I feel we should be passing delete markers through as well.
On Sun, Jun 30, 2013 at 12:35 PM, Varun Sharma wrote:
> I tried this a little bit and it seems that filters are not called on
> delete markers. For raw scans returning delete markers, does it make sense
> t
I tried this a little bit and it seems that filters are not called on
delete markers. For raw scans returning delete markers, does it make sense
to do that ?
Varun
On Sun, Jun 30, 2013 at 12:03 PM, Varun Sharma wrote:
> Hi,
>
> We are having an issue with the way HBase does ha
Hi,
We are having an issue with the way HBase handles deletes. We are
looking to retrieve 300 columns in a row, but the row has tens of thousands
of delete markers in it before we reach the 300 columns - something like this:
row: DeleteCol1 Col1 DeleteCol2 Col2 ... DeleteCol
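A sketch of the raw-scan idea pursued earlier in this thread (0.94-era API;
the counting logic is illustrative). setRaw(true) makes the scanner return
delete markers instead of applying them, so the caller can observe how many
markers sit in front of the live columns:

  import org.apache.hadoop.hbase.KeyValue;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.ResultScanner;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.util.Bytes;

  public class RawScanSketch {
    public static int countDeleteMarkers(HTable table, byte[] row) throws Exception {
      // Scan exactly one row; appending a zero byte gives the smallest
      // row strictly greater than 'row'.
      Scan scan = new Scan(row, Bytes.add(row, new byte[] { 0 }));
      scan.setRaw(true); // return delete markers rather than masking with them
      int markers = 0;
      ResultScanner scanner = table.getScanner(scan);
      try {
        for (Result r : scanner) {
          for (KeyValue kv : r.raw()) {
            if (kv.isDelete()) {
              markers++;
            }
          }
        }
      } finally {
        scanner.close();
      }
      return markers;
    }
  }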
I was looking at HDFS-347 and its nice long story with impressive
benchmarks showing it should really help with region server performance.
The question I had was whether it would still help if we were already using
the short circuit local reads setting already provided by HBase. Are there
any oth
4:14:39 PM org.apache.hadoop.hbase.regionserver.HRegionServer
cleanup
SEVERE: Failed init
Is there a fix for this or can I disable the WAL here completely ?
Varun
On Thu, Jun 20, 2013 at 12:12 PM, Christophe Taton wrote:
> Hey Varun,
>
> On Thu, Jun 20, 2013 at 11:56 AM, Varun Sharma
> wrote:
>
> > Now that I thin
On Thu, Jun 20, 2013 at 11:10 AM, Asaf Mesika wrote:
> On Thu, Jun 20, 2013 at 7:12 PM, Varun Sharma wrote:
>
> > What is the ageOfLastShippedOp as reported on your Master region servers
> > (should be available through the /jmx) - it tells the delay your edits
> are
hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
>
>
> On Tue, Jun 18, 2013 at 4:22 PM, Stack wrote:
>
> > On Tue, Jun 18, 2013 at 4:17 PM, Varun Sharma
> wrote:
> >
> > > Hi,
> > >
> > > If I wanted to write
What is the ageOfLastShippedOp as reported on your master cluster's region servers
(should be available through the /jmx) - it tells the delay your edits are
experiencing before being shipped. If this number is < 1000 (in
milliseconds), I would say replication is doing a very good job. This is
the most impor
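A sketch of checking that metric via the /jmx endpoint mentioned above
(default region server info port 60030; the host name and the string
matching are illustrative):

  import java.io.BufferedReader;
  import java.io.InputStreamReader;
  import java.net.URL;

  public class ReplicationLagSketch {
    public static void main(String[] args) throws Exception {
      // Poll a source-cluster region server's JMX servlet and print the
      // replication lag lines.
      URL url = new URL("http://rs-host:60030/jmx");
      BufferedReader in =
          new BufferedReader(new InputStreamReader(url.openStream()));
      try {
        String line;
        while ((line = in.readLine()) != null) {
          if (line.contains("ageOfLastShippedOp")) {
            System.out.println(line.trim());
          }
        }
      } finally {
        in.close();
      }
    }
  }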
.
Varun
On Tue, Jun 18, 2013 at 4:22 PM, Stack wrote:
> On Tue, Jun 18, 2013 at 4:17 PM, Varun Sharma wrote:
>
> > Hi,
> >
> > If I wanted to write a unit test against HTable/HBase, is there
> an
> > already available utility for that, for unit testing my a
Hi,
If I wanted to write a unit test against HTable/HBase, is there an
already available utility for that, for unit testing my application logic ?
I don't want to write code that either touches production or requires me to
mock an HTable. I am looking for a test HTable object which behaves
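That utility (HBaseTestingUtility, referenced above) spins up an in-JVM
cluster; a sketch of how a test might use it (0.94-era API; table, family
and value names are illustrative):

  import org.apache.hadoop.hbase.HBaseTestingUtility;
  import org.apache.hadoop.hbase.client.Get;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.util.Bytes;

  public class MiniClusterSketch {
    public static void main(String[] args) throws Exception {
      HBaseTestingUtility util = new HBaseTestingUtility();
      util.startMiniCluster(); // in-process ZK, HDFS and one region server
      try {
        HTable table = util.createTable(Bytes.toBytes("t"), Bytes.toBytes("d"));
        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("d"), Bytes.toBytes("c"), Bytes.toBytes("v"));
        table.put(put);
        byte[] value = table.get(new Get(Bytes.toBytes("row1")))
            .getValue(Bytes.toBytes("d"), Bytes.toBytes("c"));
        System.out.println(Bytes.toString(value)); // prints "v"
      } finally {
        util.shutdownMiniCluster();
      }
    }
  }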
Are you saying 97 % data was lost or was it offlined until the region
servers came back up ?
Varun
On Sat, Jun 1, 2013 at 6:31 PM, Jean-Marc Spaggiari wrote:
> Hi,
>
> Today I faced a power outage. 4 computers stayed up. The 3 ZK servers,
> the Master, the NN and 2 DN/RS. They was on UPS.
>
>
Hi,
I am working on some compaction coprocessors for a column family with
versions set to 1. I am using the preCompact hook to wrap a scanner around
the compaction scanner.
I wanted to know what to expect from the minor compaction output. Can I
assume the following:
1) Versions for each cell will
he 'start_replication' succeeded. I didn't patch 8207 because I'm on CDH
> > with Cloudera Manager Parcels thing and I'm still trying to figure out
> how
> > to replace their jars with mine in a clean and non intrusive way
> >
> >
> > On Th
you create a jira with logs/znode
> dump/steps to reproduce it?
>
> Thanks,
> himanshu
>
>
> On Wed, May 22, 2013 at 5:01 PM, Varun Sharma wrote:
>
> > It seems I can reproduce this - I did a few rolling restarts and got
> > screwed with NoNode exceptions - I am
sign for such problem. How deep should my rmr in zkcli (an
> > example would be most welcomed :) be ? I have no serious problem running
> > copyTable with a time period corresponding to the outage and then to
> start
> > the sync back again. One question though, how did it cau
/browse/HBASE-8207 - since you use hyphens in
your paths. One way to get back up is to delete these nodes but then you
lose data in these WAL(s)...
On Wed, May 22, 2013 at 2:22 PM, Amit Mor wrote:
> va-p-hbase-02-d,60020,1369249862401
>
>
> On Thu, May 23, 2013 at 12:20 AM, Varun Sha
Basically, run
ls /hbase/rs - what do you see for va-p-02-d ?
On Wed, May 22, 2013 at 2:19 PM, Varun Sharma wrote:
> Can you do ls /hbase/rs and see what you get for 02-d - instead of looking
> in /replication/, could you look in /hbase/replication/rs - I want to see
> if the times
Can you do ls /hbase/rs and see what you get for 02-d - instead of looking
in /replication/, could you look in /hbase/replication/rs - I want to see
if the timestamps are matching or not ?
Varun
On Wed, May 22, 2013 at 2:17 PM, Varun Sharma wrote:
> I see - so looks okay - there's ju
I see - so looks okay - there's just a lot of deep nesting in there - if
you look into these nodes by doing ls, you should see a bunch of
WAL(s) which still need to be replicated...
Varun
On Wed, May 22, 2013 at 2:16 PM, Varun Sharma wrote:
> 2013-05-22 15:31:25,
PM, Amit Mor wrote:
> empty return:
>
> [zk: va-p-zookeeper-01-c:2181(CONNECTED) 10] ls
> /hbase/replication/rs/va-p-hbase-01-c,60020,1369249873379/1
> []
>
>
>
> On Thu, May 23, 2013 at 12:05 AM, Varun Sharma
> wrote:
>
> > Do an "ls" not a get her
3 at 1:46 PM, amit.mor.m...@gmail.com <
> > amit.mor.m...@gmail.com> wrote:
> >
> > > ls /hbase/replication/rs/va-p-hbase-01-c,60020,1369249873379
> > > [1]
> > > [zk: va-p-zookeeper-01-c:2181(CONNECTED) 2] ls
> > > /hbase/replication/rs/va
Also what version of HBase are you running ?
On Wed, May 22, 2013 at 1:38 PM, Varun Sharma wrote:
> Basically,
>
> You had va-p-hbase-02 crash - that caused all the replication related data
> in zookeeper to be moved to va-p-hbase-01 and have it take over for
> replicating 02
Basically,
You had va-p-hbase-02 crash - that caused all the replication related data
in zookeeper to be moved to va-p-hbase-01 and have it take over for
replicating 02's logs. Now each region server also maintains an in-memory
state of what's in ZK - it seems like when you start up 01, it's trying t
So, we have a separate thread doing the recovered logs. That is good to
know. I was mostly concerned about any potential races b/w the master
renaming the log files, doing the distributed log split and doing a lease
recovery over the final file when the DN also dies. Apart from that, it
seemed to m
On Mon, May 20, 2013 at 3:54 PM, Jean-Daniel Cryans wrote:
> On Mon, May 20, 2013 at 3:48 PM, Varun Sharma wrote:
> > Thanks JD for the response... I was just wondering if issues have ever
> been
> > seen with regards to moving over a large number of WAL(s) entirely from
>
the WAL has been replicated - is it purged
immediately or soonish from the zookeeper ?
Thanks
Varun
On Mon, May 20, 2013 at 9:57 AM, Jean-Daniel Cryans wrote:
> On Mon, May 20, 2013 at 12:35 AM, Varun Sharma
> wrote:
> > Hi Lars,
> >
> > Thanks for the response.
> >
up with the deletes shipped first.
> Now imagine a compaction happens at the slave after the Deletes are
> shipped to the slave, but before the Puts are shipped... The Puts will
> reappear.
>
> -- Lars
>
>
>
>
> From: Varun Sharma
> T
Hi,
I have a couple of questions about HBase replication...
1) When we ship edits to slave cluster - do we retain the timestamps in the
edits - if we don't, I can imagine hitting some inconsistencies ?
2) When a region server fails, the master renames the directory containing
WAL(s). Does this i
concatenated key (row+col.
qualifier+timestamp) in which case, it would be difficult to run prefix
scans since prefixes could potentially bleed across row and col.
Varun
On Thu, May 16, 2013 at 11:54 PM, Michael Stack wrote:
> On Thu, May 16, 2013 at 3:26 PM, Varun Sharma wrote:
>
>>
"row1c", then thats great, though I wonder
how that would be implemented.
Varun
On Thu, May 16, 2013 at 2:55 PM, Varun Sharma wrote:
> Sorry I may have misunderstood what you meant.
>
> When you look for "row1c" in the HFile index - is it going to also match
> for "ro
look at the length of the row to grab the real portion from the
concatenated HFile key and discard all row1 entries.
Does that make my query clearer ?
On Thu, May 16, 2013 at 2:42 PM, Varun Sharma wrote:
> Nothing, I am just curious...
>
> So, we will do a bunch of wasteful sca
> What you seeing Varun (or think you are seeing)?
> St.Ack
>
>
> On Thu, May 16, 2013 at 2:30 PM, Stack wrote:
>
> > On Thu, May 16, 2013 at 2:03 PM, Varun Sharma
> wrote:
> >
> >> Or do we use some kind of demarcator b/w rows and columns and timestamps
Or do we use some kind of demarcator b/w rows and columns and timestamps
when building the HFile keys and the indices ?
Thanks
Varun
On Thu, May 16, 2013 at 1:56 PM, Varun Sharma wrote:
> Lets say I have the following in my table:
>
> col1
> row1 v1 -
Let's say I have the following in my table:

  row1  -> column "col1", value v1 (HFile entry would be "row1,col1,ts1-->v1")
  row1c -> column "ol1",  value v2 (HFile entry would be "row1c,ol1,ts1-->v2")

Now I issue a prefix scan asking for row "row1c" - how do we seek, do
w
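As the follow-ups above get at, HFile keys are not flat string
concatenations - the row is carried as a length-prefixed field - so rows
compare as whole units and "row1"+"col1" can never collide with
"row1c"+"ol1". A sketch using the 0.94-era KeyValue API (family name
illustrative):

  import org.apache.hadoop.hbase.KeyValue;
  import org.apache.hadoop.hbase.util.Bytes;

  public class KeySortSketch {
    public static void main(String[] args) {
      KeyValue a = new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("f"),
          Bytes.toBytes("col1"), 1L, Bytes.toBytes("v1"));
      KeyValue b = new KeyValue(Bytes.toBytes("row1c"), Bytes.toBytes("f"),
          Bytes.toBytes("ol1"), 1L, Bytes.toBytes("v2"));
      // The comparator orders by (row, family, qualifier, timestamp desc),
      // comparing the row as its own field, so row1 sorts before row1c.
      System.out.println(KeyValue.COMPARATOR.compare(a, b) < 0); // true
    }
  }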
Hi,
I am wondering what happens when we add the following:
row, col, timestamp --> v1
A flush happens. Now, we add
row, col, timestamp --> v2
A flush happens again. In this case, if MAX_VERSIONS == 1, how is the tie
broken during reads and during minor compactions - is it arbitrary ?
Thanks
Var
Not after but only before hitting the prefix - I will check the startRow
stuff - I could not find where the seek happens for that...
On Wed, May 15, 2013 at 7:51 AM, Stack wrote:
> On Tue, May 14, 2013 at 11:33 PM, Varun Sharma
> wrote:
>
> > Hi,
> >
> > I was
Hi,
I was looking at PrefixFilter but going by the implementation - it looks
like we scan every row until we hit the prefix instead of seeking to the
row with the required prefix.
I was wondering if there are more efficient alternatives which would do a
real seek rather than scanning all rows. Would s
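The usual fix, sketched (0.94-era API): pair the filter with an explicit
startRow so the scan opens at the prefix rather than filtering from the
start of the table:

  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.filter.PrefixFilter;

  public class PrefixScanSketch {
    public static Scan prefixScan(byte[] prefix) {
      Scan scan = new Scan();
      scan.setStartRow(prefix); // seek straight to the prefix...
      scan.setFilter(new PrefixFilter(prefix)); // ...and stop once past it
      return scan;
    }
  }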
Do you have NTP on your cluster - I have seen this manifest due to clock
skew..
Varun
On Tue, May 7, 2013 at 6:05 AM, Fabien Chung wrote:
> Hi all,
>
> i have a cluster with 8 machines (CDH4). I use an ETL (Talend) to insert
> data into hbase. Most of the time that works perfectly, but sometimes ro
Did you have the JVM error logging enabled (-XX:ErrorFile or something) and
if yes, did that spew anything out ?
Thanks
Varun
On Sun, May 5, 2013 at 10:18 PM, tsuna wrote:
> On Thu, May 2, 2013 at 1:49 PM, Andrew Purtell
> wrote:
> > In that blog post Benoît does a fair amount of showing off to
>
>
> On Thu, May 2, 2013 at 1:39 PM, Varun Sharma wrote:
>
> > I don't have one unfortunately - We did not have the -XX:ErrorFile turned
> on
> > :(
> >
> > But I did some digging following what Benoit wrote in his Blog. Basically
> > the segfault ha
java 1.6.0u38. Is this possibly too old ?
Thanks
Varun
On Thu, May 2, 2013 at 12:08 PM, Andrew Purtell wrote:
> Can you pastebin or post somewhere the entire hs_err* file?
>
>
> On Wed, May 1, 2013 at 1:54 PM, Varun Sharma wrote:
>
> > Hi,
> >
> > I am seeing
On Wed, May 1, 2013 at 1:54 PM, Varun Sharma wrote:
> Hi,
>
> I am seeing the following which is a JVM segfault:
>
> hbase-regionser[28734]: segfault at 8 ip 7f269bcc307e sp
> 7fff50f7e638 error 4 in libc-2.15.so[7f269bc51000+1b5000]
>
> Benoit Tsuna reported a
Hi,
I am seeing the following which is a JVM segfault:
hbase-regionser[28734]: segfault at 8 ip 7f269bcc307e sp
7fff50f7e638 error 4 in libc-2.15.so[7f269bc51000+1b5000]
Benoit Tsuna reported a similar issue a while back -
http://blog.tsunanet.net/2011/05/jvm-u24-segfault-in-clearerr-on-
Hi Ted, Nicholas,
Thanks for the comments. We found some issues with lease recovery and I
patched HBASE-8354 to ensure we don't see data loss. Could you please look
at HDFS-4721 and HBASE-8389 ?
Thanks
Varun
On Sat, Apr 20, 2013 at 10:52 AM, Varun Sharma wrote:
> The important thing
The important thing to note is that the block for this rogue WAL is in
UNDER_RECOVERY state. I have repeatedly asked HDFS dev whether the stale node
logic kicks in correctly for UNDER_RECOVERY blocks, but without luck.
On Sat, Apr 20, 2013 at 10:47 AM, Varun Sharma wrote:
> Hi Nicholas,
>
> Rega
ment, and we don't. That's also why it takes ages. I think it's an
> AWS thing, but it brings to issue: it's slow, and, in HBase, you don't know
> if the operation could have been executed or not, so it adds complexity to
> some scenarios. If someone with enough
This is 0.94.3 hbase...
On Fri, Apr 19, 2013 at 1:09 PM, Varun Sharma wrote:
> Hi Ted,
>
> I had a long offline discussion with nicholas on this. Looks like the last
> block, which was still being written to, took an enormous time to recover.
> Here's what happened.
> a)
be treated as stale node.
>
>
> On Fri, Apr 19, 2013 at 10:28 AM, Varun Sharma
> wrote:
>
> > Is there a place to upload these logs ?
> >
> >
> > On Fri, Apr 19, 2013 at 10:25 AM, Varun Sharma
> > wrote:
> >
> > > Hi Nicholas,
> >
> + * the namenode has not received heartbeat msg from a
> + * datanode for more than staleInterval (default value is
> + * {@link
> DFSConfigKeys#DFS_NAMENODE_STALE_DATANODE_INTERVAL_MILLI_DEFAULT}),
> + * the datanode will be treated as stale node.
>
>
> On Fri,
Is there a place to upload these logs ?
On Fri, Apr 19, 2013 at 10:25 AM, Varun Sharma wrote:
> Hi Nicholas,
>
> Attached are the namenode, dn logs (of one of the healthy replicas of the
> WAL block) and the rs logs which got stuck doing the log split. Action
> begins at 2
are the logs and the configuration (hdfs / hbase
> settings + cluster description). What's the failure scenario?
> From an HDFS pov, HDFS 3703 does not change the dead node status. But these
> node will be given the lowest priority when reading.
>
>
> Cheers,
>
> N
I am wondering if DFSClient caches the data node for a long period of time ?
Varun
On Thu, Apr 18, 2013 at 6:01 PM, Varun Sharma wrote:
> Hi,
>
> We are facing problems with really slow HBase region server recoveries ~
> 20 minutes. Version is hbase 0.94.3 compiled with hadoop
Hi,
We are facing problems with really slow HBase region server recoveries ~ 20
minutes. Version is hbase 0.94.3 compiled with hadoop.profile=2.0.
Hadoop version is CDH 4.2 with HDFS-3703 and HDFS-3912 patched and stale
node timeouts configured correctly. Time for dead node detection is still
10
Hi,
If I perform a full row Delete using the Delete API for a row and then,
after a few milliseconds, issue a Put on the same row - will
that go through, assuming that timestamps are applied in increasing order ?
Thanks
Varun
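A sketch of the failure mode being asked about, and a guard (0.94-era API;
family, qualifier and values are illustrative): if the Delete and the Put
land on the same millisecond, the delete marker masks the Put until the
marker is purged by a major compaction, so stamping the Delete strictly
below the Put avoids the tie:

  import org.apache.hadoop.hbase.client.Delete;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.util.Bytes;

  public class DeleteThenPutSketch {
    public static void deleteThenPut(HTable table, byte[] row) throws Exception {
      long now = System.currentTimeMillis();
      // Whole-row delete stamped one tick below the subsequent put.
      Delete delete = new Delete(row, now - 1, null);
      table.delete(delete);
      Put put = new Put(row);
      put.add(Bytes.toBytes("d"), Bytes.toBytes("c"), now, Bytes.toBytes("v"));
      table.put(put);
    }
  }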
Hi,
We have been observing this bug for a while when we use the HTable.get()
operation to do a single Get call using the "Result get(Get get)" API, and I
thought it's best to bring it up.
Steps to reproduce this bug:
1) Gracefully restart a region server, causing regions to get redistributed.
2) Client call to
ed Yu :
> > > Christophe:
> > > HBASE-5257 has been integrated into 0.94
> > > Can you try 0.94.6.1 to see if the problem is solved ?
> > >
> > > Writing a unit test probably is the easiest way for validation.
> > >
> > > Thanks
>
HBASE-5257 is probably what Lars is talking about - that fixed a bug with
version tracking in ColumnPaginationFilter - there is a patch for 0.92,
0.94 and 0.96 but not for the CDH versions...
On Fri, Apr 5, 2013 at 3:28 PM, lars hofhansl wrote:
> Normally Filters are evaluated before the version
> applications.
>
> -n
>
> On Thu, Apr 4, 2013 at 10:31 AM, Varun Sharma wrote:
>
> > Hi,
> >
> > I am thinking of adding a string offset to ColumnPaginationFilter. There
> > are two reasons:
> >
> > 1) For deep pagination, you can seek using
Hi,
I am thinking of adding a string offset to ColumnPaginationFilter. There
are two reasons:
1) For deep pagination, you can seek using SEEK_NEXT_USING_HINT.
2) For correctness reasons, this approach is better if the list of columns
is mutating. Let's say you get the 1st 50 columns using the current
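For context, a sketch of the filter as it stands (0.94-era API; the page
math is illustrative) - the integer offset is exactly what drifts when
columns mutate between pages, which motivates the qualifier-based offset
proposed above:

  import org.apache.hadoop.hbase.client.Get;
  import org.apache.hadoop.hbase.filter.ColumnPaginationFilter;

  public class PaginationSketch {
    public static Get page(byte[] row, int pageSize, int pageIndex) {
      Get get = new Get(row);
      // Current form: (limit, integer offset). If columns are inserted or
      // deleted between pages, this offset no longer points where the
      // previous page ended.
      get.setFilter(new ColumnPaginationFilter(pageSize, pageSize * pageIndex));
      return get;
    }
  }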
Hey folks,
I was wondering what kind of GC times people see (preferably on EC2).
Such as what is the typical time to collect 256M new generation on an X
core machine. We are seeing a pause time of ~50 milliseconds on a c1.xlarge
machine for 256M - this has 8 virtual cores. Is that typical ?
Th